00:00:00.001 Started by upstream project "autotest-nightly" build number 3878 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3258 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.116 The recommended git tool is: git 00:00:00.116 using credential 00000000-0000-0000-0000-000000000002 00:00:00.117 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.175 Using shallow fetch with depth 1 00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.175 > git --version # timeout=10 00:00:00.204 > git --version # 'git version 2.39.2' 00:00:00.204 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.900 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.910 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.920 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:04.920 > git config core.sparsecheckout # timeout=10 00:00:04.929 > git read-tree -mu HEAD # timeout=10 00:00:04.943 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:04.958 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:04.958 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:05.034 [Pipeline] Start of Pipeline 00:00:05.046 [Pipeline] library 00:00:05.048 Loading library shm_lib@master 00:00:05.048 Library shm_lib@master is cached. Copying from home. 00:00:05.063 [Pipeline] node 00:00:05.072 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.074 [Pipeline] { 00:00:05.083 [Pipeline] catchError 00:00:05.084 [Pipeline] { 00:00:05.095 [Pipeline] wrap 00:00:05.103 [Pipeline] { 00:00:05.110 [Pipeline] stage 00:00:05.111 [Pipeline] { (Prologue) 00:00:05.302 [Pipeline] sh 00:00:05.582 + logger -p user.info -t JENKINS-CI 00:00:05.601 [Pipeline] echo 00:00:05.603 Node: WFP8 00:00:05.610 [Pipeline] sh 00:00:05.907 [Pipeline] setCustomBuildProperty 00:00:05.921 [Pipeline] echo 00:00:05.922 Cleanup processes 00:00:05.925 [Pipeline] sh 00:00:06.203 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.203 2109055 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.215 [Pipeline] sh 00:00:06.501 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.501 ++ grep -v 'sudo pgrep' 00:00:06.501 ++ awk '{print $1}' 00:00:06.501 + sudo kill -9 00:00:06.501 + true 00:00:06.518 [Pipeline] cleanWs 00:00:06.528 [WS-CLEANUP] Deleting project workspace... 00:00:06.528 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.536 [WS-CLEANUP] done 00:00:06.540 [Pipeline] setCustomBuildProperty 00:00:06.556 [Pipeline] sh 00:00:06.839 + sudo git config --global --replace-all safe.directory '*' 00:00:06.947 [Pipeline] httpRequest 00:00:06.976 [Pipeline] echo 00:00:06.978 Sorcerer 10.211.164.101 is alive 00:00:06.985 [Pipeline] httpRequest 00:00:06.990 HttpMethod: GET 00:00:06.990 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.991 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:07.009 Response Code: HTTP/1.1 200 OK 00:00:07.009 Success: Status code 200 is in the accepted range: 200,404 00:00:07.010 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:33.444 [Pipeline] sh 00:00:33.728 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:33.751 [Pipeline] httpRequest 00:00:33.794 [Pipeline] echo 00:00:33.806 Sorcerer 10.211.164.101 is alive 00:00:33.842 [Pipeline] httpRequest 00:00:33.848 HttpMethod: GET 00:00:33.848 URL: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:33.849 Sending request to url: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:33.855 Response Code: HTTP/1.1 200 OK 00:00:33.855 Success: Status code 200 is in the accepted range: 200,404 00:00:33.856 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:01:21.477 [Pipeline] sh 00:01:21.759 + tar --no-same-owner -xf spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:01:24.308 [Pipeline] sh 00:01:24.591 + git -C spdk log --oneline -n5 00:01:24.591 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:24.591 6c7c1f57e accel: add sequence outstanding stat 00:01:24.591 3bc8e6a26 accel: add utility to put task 00:01:24.591 2dba73997 accel: move get task utility 00:01:24.591 e45c8090e accel: improve accel sequence obj release 00:01:24.603 [Pipeline] } 00:01:24.622 [Pipeline] // stage 00:01:24.634 [Pipeline] stage 00:01:24.636 [Pipeline] { (Prepare) 00:01:24.660 [Pipeline] writeFile 00:01:24.682 [Pipeline] sh 00:01:24.994 + logger -p user.info -t JENKINS-CI 00:01:25.006 [Pipeline] sh 00:01:25.287 + logger -p user.info -t JENKINS-CI 00:01:25.299 [Pipeline] sh 00:01:25.579 + cat autorun-spdk.conf 00:01:25.579 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.579 SPDK_TEST_NVMF=1 00:01:25.579 SPDK_TEST_NVME_CLI=1 00:01:25.579 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.579 SPDK_TEST_NVMF_NICS=e810 00:01:25.579 SPDK_RUN_ASAN=1 00:01:25.579 SPDK_RUN_UBSAN=1 00:01:25.580 NET_TYPE=phy 00:01:25.587 RUN_NIGHTLY=1 00:01:25.594 [Pipeline] readFile 00:01:25.624 [Pipeline] withEnv 00:01:25.626 [Pipeline] { 00:01:25.644 [Pipeline] sh 00:01:25.934 + set -ex 00:01:25.934 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:25.934 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.934 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.934 ++ SPDK_TEST_NVMF=1 00:01:25.934 ++ SPDK_TEST_NVME_CLI=1 00:01:25.934 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.934 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.934 ++ SPDK_RUN_ASAN=1 00:01:25.934 ++ SPDK_RUN_UBSAN=1 00:01:25.934 ++ NET_TYPE=phy 00:01:25.934 ++ RUN_NIGHTLY=1 00:01:25.934 + case $SPDK_TEST_NVMF_NICS in 00:01:25.934 + DRIVERS=ice 00:01:25.934 + [[ tcp == \r\d\m\a ]] 00:01:25.934 + [[ -n ice ]] 
00:01:25.934 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:25.934 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:25.934 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:25.934 rmmod: ERROR: Module irdma is not currently loaded 00:01:25.934 rmmod: ERROR: Module i40iw is not currently loaded 00:01:25.934 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:25.934 + true 00:01:25.934 + for D in $DRIVERS 00:01:25.934 + sudo modprobe ice 00:01:25.934 + exit 0 00:01:25.943 [Pipeline] } 00:01:25.962 [Pipeline] // withEnv 00:01:25.967 [Pipeline] } 00:01:25.987 [Pipeline] // stage 00:01:25.999 [Pipeline] catchError 00:01:26.001 [Pipeline] { 00:01:26.018 [Pipeline] timeout 00:01:26.018 Timeout set to expire in 50 min 00:01:26.020 [Pipeline] { 00:01:26.038 [Pipeline] stage 00:01:26.040 [Pipeline] { (Tests) 00:01:26.058 [Pipeline] sh 00:01:26.343 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.343 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.343 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.343 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:26.343 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:26.343 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.343 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:26.343 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:26.343 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.343 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:26.343 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:26.343 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.343 + source /etc/os-release 00:01:26.343 ++ NAME='Fedora Linux' 00:01:26.343 ++ VERSION='38 (Cloud Edition)' 00:01:26.343 ++ ID=fedora 00:01:26.343 ++ VERSION_ID=38 00:01:26.343 ++ VERSION_CODENAME= 00:01:26.343 ++ PLATFORM_ID=platform:f38 00:01:26.343 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:26.343 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.343 ++ LOGO=fedora-logo-icon 00:01:26.343 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:26.343 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.343 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:26.343 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.343 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.343 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.343 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:26.343 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.343 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:26.343 ++ SUPPORT_END=2024-05-14 00:01:26.343 ++ VARIANT='Cloud Edition' 00:01:26.343 ++ VARIANT_ID=cloud 00:01:26.343 + uname -a 00:01:26.343 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:26.343 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:28.881 Hugepages 00:01:28.881 node hugesize free / total 00:01:28.881 node0 1048576kB 0 / 0 00:01:28.881 node0 2048kB 0 / 0 00:01:28.881 node1 1048576kB 0 / 0 00:01:28.881 node1 2048kB 0 / 0 00:01:28.881 00:01:28.881 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.881 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:28.881 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:28.881 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:28.881 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 
00:01:28.881 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:28.881 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:28.881 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:28.881 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:28.881 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:28.881 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:28.881 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:28.881 + rm -f /tmp/spdk-ld-path 00:01:28.881 + source autorun-spdk.conf 00:01:28.881 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.881 ++ SPDK_TEST_NVMF=1 00:01:28.881 ++ SPDK_TEST_NVME_CLI=1 00:01:28.881 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.881 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.881 ++ SPDK_RUN_ASAN=1 00:01:28.881 ++ SPDK_RUN_UBSAN=1 00:01:28.881 ++ NET_TYPE=phy 00:01:28.881 ++ RUN_NIGHTLY=1 00:01:28.881 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.881 + [[ -n '' ]] 00:01:28.881 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.881 + for M in /var/spdk/build-*-manifest.txt 00:01:28.881 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.881 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.881 + for M in /var/spdk/build-*-manifest.txt 00:01:28.881 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.881 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.881 ++ uname 00:01:28.881 + [[ Linux == \L\i\n\u\x ]] 00:01:28.881 + sudo dmesg -T 00:01:28.881 + sudo dmesg --clear 00:01:28.881 + dmesg_pid=2109998 00:01:28.881 + [[ Fedora Linux == FreeBSD ]] 00:01:28.881 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.881 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.881 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.881 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.881 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.881 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.881 + sudo dmesg -Tw 00:01:28.881 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.881 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:28.881 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.881 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.881 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.881 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.881 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.881 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.881 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.881 Test configuration: 00:01:28.881 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.881 SPDK_TEST_NVMF=1 00:01:28.881 SPDK_TEST_NVME_CLI=1 00:01:28.881 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.881 SPDK_TEST_NVMF_NICS=e810 00:01:28.881 SPDK_RUN_ASAN=1 00:01:28.881 SPDK_RUN_UBSAN=1 00:01:28.881 NET_TYPE=phy 00:01:28.881 RUN_NIGHTLY=1 23:05:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.881 23:05:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.881 23:05:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.881 23:05:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.881 23:05:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.881 23:05:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.882 23:05:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.882 23:05:37 -- paths/export.sh@5 -- $ export PATH 00:01:28.882 23:05:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.882 23:05:37 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.882 23:05:37 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:28.882 23:05:37 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720645537.XXXXXX 00:01:28.882 23:05:37 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720645537.LvyVWg 00:01:28.882 23:05:37 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:28.882 23:05:37 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:28.882 23:05:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:28.882 23:05:37 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.882 23:05:37 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.882 23:05:37 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:28.882 23:05:37 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:28.882 23:05:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.882 23:05:37 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:28.882 23:05:37 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:28.882 23:05:37 -- pm/common@17 -- $ local monitor 00:01:28.882 23:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.882 23:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.882 23:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.882 23:05:37 -- pm/common@21 -- $ date +%s 00:01:28.882 23:05:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.882 23:05:37 -- pm/common@21 -- $ date +%s 00:01:28.882 23:05:37 -- pm/common@25 -- $ sleep 1 00:01:28.882 23:05:37 -- pm/common@21 -- $ date +%s 00:01:28.882 23:05:37 -- pm/common@21 -- $ date +%s 00:01:28.882 23:05:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720645537 00:01:28.882 23:05:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720645537 00:01:28.882 23:05:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720645537 00:01:28.882 23:05:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720645537 00:01:28.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720645537_collect-vmstat.pm.log 00:01:28.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720645537_collect-cpu-load.pm.log 00:01:28.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720645537_collect-cpu-temp.pm.log 00:01:28.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720645537_collect-bmc-pm.bmc.pm.log 00:01:29.820 23:05:38 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:29.820 23:05:38 -- 
spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.820 23:05:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.820 23:05:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.820 23:05:38 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.820 Wed Jul 10 09:05:38 PM UTC 2024 00:01:29.820 23:05:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.820 v24.09-pre-200-g9937c0160 00:01:29.820 23:05:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:29.820 23:05:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:29.820 23:05:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:29.820 23:05:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:29.820 23:05:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.820 ************************************ 00:01:29.820 START TEST asan 00:01:29.820 ************************************ 00:01:29.820 23:05:38 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:29.820 using asan 00:01:29.820 00:01:29.820 real 0m0.000s 00:01:29.820 user 0m0.000s 00:01:29.820 sys 0m0.000s 00:01:29.820 23:05:38 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:29.820 23:05:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.820 ************************************ 00:01:29.820 END TEST asan 00:01:29.820 ************************************ 00:01:29.820 23:05:38 -- common/autotest_common.sh@1142 -- $ return 0 00:01:29.820 23:05:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.820 23:05:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.820 23:05:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:29.820 23:05:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:29.820 23:05:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.820 ************************************ 00:01:29.820 START TEST ubsan 00:01:29.820 ************************************ 00:01:29.820 23:05:38 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:29.820 using ubsan 00:01:29.820 00:01:29.820 real 0m0.000s 00:01:29.820 user 0m0.000s 00:01:29.820 sys 0m0.000s 00:01:29.820 23:05:38 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:29.820 23:05:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.820 ************************************ 00:01:29.820 END TEST ubsan 00:01:29.820 ************************************ 00:01:30.080 23:05:38 -- common/autotest_common.sh@1142 -- $ return 0 00:01:30.080 23:05:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.080 23:05:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.080 23:05:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.080 23:05:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.080 23:05:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.080 23:05:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.080 23:05:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.080 23:05:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.080 23:05:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:30.080 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:30.080 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.339 Using 'verbs' RDMA provider 00:01:43.485 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:53.519 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:53.519 Creating mk/config.mk...done. 00:01:53.519 Creating mk/cc.flags.mk...done. 00:01:53.519 Type 'make' to build. 00:01:53.519 23:06:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:53.519 23:06:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:53.519 23:06:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:53.519 23:06:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.519 ************************************ 00:01:53.519 START TEST make 00:01:53.519 ************************************ 00:01:53.519 23:06:02 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:54.086 make[1]: Nothing to be done for 'all'. 00:02:02.212 The Meson build system 00:02:02.212 Version: 1.3.1 00:02:02.212 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:02.212 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:02.212 Build type: native build 00:02:02.212 Program cat found: YES (/usr/bin/cat) 00:02:02.212 Project name: DPDK 00:02:02.212 Project version: 24.03.0 00:02:02.212 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:02.212 C linker for the host machine: cc ld.bfd 2.39-16 00:02:02.212 Host machine cpu family: x86_64 00:02:02.212 Host machine cpu: x86_64 00:02:02.212 Message: ## Building in Developer Mode ## 00:02:02.212 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.212 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.212 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.212 Program python3 found: YES (/usr/bin/python3) 00:02:02.212 Program cat found: YES (/usr/bin/cat) 00:02:02.212 Compiler for C supports arguments -march=native: YES 00:02:02.212 Checking for size of "void *" : 8 00:02:02.212 Checking for size of "void *" : 8 (cached) 00:02:02.212 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:02.212 Library m found: YES 00:02:02.212 Library numa found: YES 00:02:02.212 Has header "numaif.h" : YES 00:02:02.212 Library fdt found: NO 00:02:02.212 Library execinfo found: NO 00:02:02.212 Has header "execinfo.h" : YES 00:02:02.212 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:02.212 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.212 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.212 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.212 Run-time dependency openssl found: YES 3.0.9 00:02:02.212 Run-time dependency libpcap found: YES 1.10.4 00:02:02.212 Has header "pcap.h" with dependency libpcap: YES 00:02:02.212 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.212 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.212 Compiler for C supports arguments -Wformat: YES 00:02:02.212 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.212 Compiler for C supports arguments -Wformat-security: NO 00:02:02.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.212 Compiler for C supports 
arguments -Wmissing-prototypes: YES 00:02:02.212 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.212 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.212 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.212 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.212 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.212 Compiler for C supports arguments -Wundef: YES 00:02:02.212 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.212 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.212 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.212 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.212 Program objdump found: YES (/usr/bin/objdump) 00:02:02.212 Compiler for C supports arguments -mavx512f: YES 00:02:02.212 Checking if "AVX512 checking" compiles: YES 00:02:02.212 Fetching value of define "__SSE4_2__" : 1 00:02:02.212 Fetching value of define "__AES__" : 1 00:02:02.212 Fetching value of define "__AVX__" : 1 00:02:02.212 Fetching value of define "__AVX2__" : 1 00:02:02.212 Fetching value of define "__AVX512BW__" : 1 00:02:02.212 Fetching value of define "__AVX512CD__" : 1 00:02:02.212 Fetching value of define "__AVX512DQ__" : 1 00:02:02.212 Fetching value of define "__AVX512F__" : 1 00:02:02.212 Fetching value of define "__AVX512VL__" : 1 00:02:02.212 Fetching value of define "__PCLMUL__" : 1 00:02:02.212 Fetching value of define "__RDRND__" : 1 00:02:02.212 Fetching value of define "__RDSEED__" : 1 00:02:02.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.212 Fetching value of define "__znver1__" : (undefined) 00:02:02.212 Fetching value of define "__znver2__" : (undefined) 00:02:02.212 Fetching value of define "__znver3__" : (undefined) 00:02:02.212 Fetching value of define "__znver4__" : (undefined) 00:02:02.212 Library asan found: YES 00:02:02.212 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.212 Message: lib/log: Defining dependency "log" 00:02:02.212 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.212 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.212 Library rt found: YES 00:02:02.212 Checking for function "getentropy" : NO 00:02:02.212 Message: lib/eal: Defining dependency "eal" 00:02:02.212 Message: lib/ring: Defining dependency "ring" 00:02:02.212 Message: lib/rcu: Defining dependency "rcu" 00:02:02.212 Message: lib/mempool: Defining dependency "mempool" 00:02:02.212 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.212 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:02.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:02.212 Compiler for C supports arguments -mpclmul: YES 00:02:02.212 Compiler for C supports arguments -maes: YES 00:02:02.212 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.212 Compiler for C supports arguments -mavx512bw: YES 00:02:02.212 Compiler for C supports arguments -mavx512dq: YES 00:02:02.212 Compiler for C supports arguments -mavx512vl: YES 00:02:02.212 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.212 
Compiler for C supports arguments -mavx2: YES 00:02:02.212 Compiler for C supports arguments -mavx: YES 00:02:02.212 Message: lib/net: Defining dependency "net" 00:02:02.212 Message: lib/meter: Defining dependency "meter" 00:02:02.212 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.212 Message: lib/pci: Defining dependency "pci" 00:02:02.212 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.212 Message: lib/hash: Defining dependency "hash" 00:02:02.212 Message: lib/timer: Defining dependency "timer" 00:02:02.212 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.212 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.212 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.212 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.212 Message: lib/power: Defining dependency "power" 00:02:02.212 Message: lib/reorder: Defining dependency "reorder" 00:02:02.212 Message: lib/security: Defining dependency "security" 00:02:02.212 Has header "linux/userfaultfd.h" : YES 00:02:02.212 Has header "linux/vduse.h" : YES 00:02:02.212 Message: lib/vhost: Defining dependency "vhost" 00:02:02.213 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.213 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.213 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.213 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.213 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.213 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.213 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.213 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.213 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.213 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.213 Program doxygen found: YES (/usr/bin/doxygen) 00:02:02.213 Configuring doxy-api-html.conf using configuration 00:02:02.213 Configuring doxy-api-man.conf using configuration 00:02:02.213 Program mandb found: YES (/usr/bin/mandb) 00:02:02.213 Program sphinx-build found: NO 00:02:02.213 Configuring rte_build_config.h using configuration 00:02:02.213 Message: 00:02:02.213 ================= 00:02:02.213 Applications Enabled 00:02:02.213 ================= 00:02:02.213 00:02:02.213 apps: 00:02:02.213 00:02:02.213 00:02:02.213 Message: 00:02:02.213 ================= 00:02:02.213 Libraries Enabled 00:02:02.213 ================= 00:02:02.213 00:02:02.213 libs: 00:02:02.213 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.213 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.213 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.213 00:02:02.213 Message: 00:02:02.213 =============== 00:02:02.213 Drivers Enabled 00:02:02.213 =============== 00:02:02.213 00:02:02.213 common: 00:02:02.213 00:02:02.213 bus: 00:02:02.213 pci, vdev, 00:02:02.213 mempool: 00:02:02.213 ring, 00:02:02.213 dma: 00:02:02.213 00:02:02.213 net: 00:02:02.213 00:02:02.213 crypto: 00:02:02.213 00:02:02.213 compress: 00:02:02.213 00:02:02.213 vdpa: 00:02:02.213 00:02:02.213 00:02:02.213 Message: 00:02:02.213 ================= 00:02:02.213 Content Skipped 00:02:02.213 ================= 00:02:02.213 00:02:02.213 apps: 00:02:02.213 dumpcap: explicitly disabled via build config 00:02:02.213 graph: explicitly disabled via build config 00:02:02.213 
pdump: explicitly disabled via build config 00:02:02.213 proc-info: explicitly disabled via build config 00:02:02.213 test-acl: explicitly disabled via build config 00:02:02.213 test-bbdev: explicitly disabled via build config 00:02:02.213 test-cmdline: explicitly disabled via build config 00:02:02.213 test-compress-perf: explicitly disabled via build config 00:02:02.213 test-crypto-perf: explicitly disabled via build config 00:02:02.213 test-dma-perf: explicitly disabled via build config 00:02:02.213 test-eventdev: explicitly disabled via build config 00:02:02.213 test-fib: explicitly disabled via build config 00:02:02.213 test-flow-perf: explicitly disabled via build config 00:02:02.213 test-gpudev: explicitly disabled via build config 00:02:02.213 test-mldev: explicitly disabled via build config 00:02:02.213 test-pipeline: explicitly disabled via build config 00:02:02.213 test-pmd: explicitly disabled via build config 00:02:02.213 test-regex: explicitly disabled via build config 00:02:02.213 test-sad: explicitly disabled via build config 00:02:02.213 test-security-perf: explicitly disabled via build config 00:02:02.213 00:02:02.213 libs: 00:02:02.213 argparse: explicitly disabled via build config 00:02:02.213 metrics: explicitly disabled via build config 00:02:02.213 acl: explicitly disabled via build config 00:02:02.213 bbdev: explicitly disabled via build config 00:02:02.213 bitratestats: explicitly disabled via build config 00:02:02.213 bpf: explicitly disabled via build config 00:02:02.213 cfgfile: explicitly disabled via build config 00:02:02.213 distributor: explicitly disabled via build config 00:02:02.213 efd: explicitly disabled via build config 00:02:02.213 eventdev: explicitly disabled via build config 00:02:02.213 dispatcher: explicitly disabled via build config 00:02:02.213 gpudev: explicitly disabled via build config 00:02:02.213 gro: explicitly disabled via build config 00:02:02.213 gso: explicitly disabled via build config 00:02:02.213 ip_frag: explicitly disabled via build config 00:02:02.213 jobstats: explicitly disabled via build config 00:02:02.213 latencystats: explicitly disabled via build config 00:02:02.213 lpm: explicitly disabled via build config 00:02:02.213 member: explicitly disabled via build config 00:02:02.213 pcapng: explicitly disabled via build config 00:02:02.213 rawdev: explicitly disabled via build config 00:02:02.213 regexdev: explicitly disabled via build config 00:02:02.213 mldev: explicitly disabled via build config 00:02:02.213 rib: explicitly disabled via build config 00:02:02.213 sched: explicitly disabled via build config 00:02:02.213 stack: explicitly disabled via build config 00:02:02.213 ipsec: explicitly disabled via build config 00:02:02.213 pdcp: explicitly disabled via build config 00:02:02.213 fib: explicitly disabled via build config 00:02:02.213 port: explicitly disabled via build config 00:02:02.213 pdump: explicitly disabled via build config 00:02:02.213 table: explicitly disabled via build config 00:02:02.213 pipeline: explicitly disabled via build config 00:02:02.213 graph: explicitly disabled via build config 00:02:02.213 node: explicitly disabled via build config 00:02:02.213 00:02:02.213 drivers: 00:02:02.213 common/cpt: not in enabled drivers build config 00:02:02.213 common/dpaax: not in enabled drivers build config 00:02:02.213 common/iavf: not in enabled drivers build config 00:02:02.213 common/idpf: not in enabled drivers build config 00:02:02.213 common/ionic: not in enabled drivers build config 00:02:02.213 
common/mvep: not in enabled drivers build config 00:02:02.213 common/octeontx: not in enabled drivers build config 00:02:02.213 bus/auxiliary: not in enabled drivers build config 00:02:02.213 bus/cdx: not in enabled drivers build config 00:02:02.213 bus/dpaa: not in enabled drivers build config 00:02:02.213 bus/fslmc: not in enabled drivers build config 00:02:02.213 bus/ifpga: not in enabled drivers build config 00:02:02.213 bus/platform: not in enabled drivers build config 00:02:02.213 bus/uacce: not in enabled drivers build config 00:02:02.213 bus/vmbus: not in enabled drivers build config 00:02:02.213 common/cnxk: not in enabled drivers build config 00:02:02.213 common/mlx5: not in enabled drivers build config 00:02:02.213 common/nfp: not in enabled drivers build config 00:02:02.213 common/nitrox: not in enabled drivers build config 00:02:02.213 common/qat: not in enabled drivers build config 00:02:02.213 common/sfc_efx: not in enabled drivers build config 00:02:02.213 mempool/bucket: not in enabled drivers build config 00:02:02.213 mempool/cnxk: not in enabled drivers build config 00:02:02.213 mempool/dpaa: not in enabled drivers build config 00:02:02.213 mempool/dpaa2: not in enabled drivers build config 00:02:02.213 mempool/octeontx: not in enabled drivers build config 00:02:02.213 mempool/stack: not in enabled drivers build config 00:02:02.213 dma/cnxk: not in enabled drivers build config 00:02:02.213 dma/dpaa: not in enabled drivers build config 00:02:02.213 dma/dpaa2: not in enabled drivers build config 00:02:02.213 dma/hisilicon: not in enabled drivers build config 00:02:02.213 dma/idxd: not in enabled drivers build config 00:02:02.213 dma/ioat: not in enabled drivers build config 00:02:02.213 dma/skeleton: not in enabled drivers build config 00:02:02.213 net/af_packet: not in enabled drivers build config 00:02:02.213 net/af_xdp: not in enabled drivers build config 00:02:02.213 net/ark: not in enabled drivers build config 00:02:02.213 net/atlantic: not in enabled drivers build config 00:02:02.213 net/avp: not in enabled drivers build config 00:02:02.213 net/axgbe: not in enabled drivers build config 00:02:02.213 net/bnx2x: not in enabled drivers build config 00:02:02.213 net/bnxt: not in enabled drivers build config 00:02:02.213 net/bonding: not in enabled drivers build config 00:02:02.213 net/cnxk: not in enabled drivers build config 00:02:02.213 net/cpfl: not in enabled drivers build config 00:02:02.213 net/cxgbe: not in enabled drivers build config 00:02:02.213 net/dpaa: not in enabled drivers build config 00:02:02.213 net/dpaa2: not in enabled drivers build config 00:02:02.213 net/e1000: not in enabled drivers build config 00:02:02.213 net/ena: not in enabled drivers build config 00:02:02.213 net/enetc: not in enabled drivers build config 00:02:02.213 net/enetfec: not in enabled drivers build config 00:02:02.213 net/enic: not in enabled drivers build config 00:02:02.213 net/failsafe: not in enabled drivers build config 00:02:02.213 net/fm10k: not in enabled drivers build config 00:02:02.213 net/gve: not in enabled drivers build config 00:02:02.213 net/hinic: not in enabled drivers build config 00:02:02.213 net/hns3: not in enabled drivers build config 00:02:02.213 net/i40e: not in enabled drivers build config 00:02:02.213 net/iavf: not in enabled drivers build config 00:02:02.213 net/ice: not in enabled drivers build config 00:02:02.213 net/idpf: not in enabled drivers build config 00:02:02.213 net/igc: not in enabled drivers build config 00:02:02.213 net/ionic: not in 
enabled drivers build config 00:02:02.213 net/ipn3ke: not in enabled drivers build config 00:02:02.213 net/ixgbe: not in enabled drivers build config 00:02:02.213 net/mana: not in enabled drivers build config 00:02:02.213 net/memif: not in enabled drivers build config 00:02:02.213 net/mlx4: not in enabled drivers build config 00:02:02.213 net/mlx5: not in enabled drivers build config 00:02:02.213 net/mvneta: not in enabled drivers build config 00:02:02.213 net/mvpp2: not in enabled drivers build config 00:02:02.213 net/netvsc: not in enabled drivers build config 00:02:02.213 net/nfb: not in enabled drivers build config 00:02:02.213 net/nfp: not in enabled drivers build config 00:02:02.213 net/ngbe: not in enabled drivers build config 00:02:02.213 net/null: not in enabled drivers build config 00:02:02.213 net/octeontx: not in enabled drivers build config 00:02:02.213 net/octeon_ep: not in enabled drivers build config 00:02:02.213 net/pcap: not in enabled drivers build config 00:02:02.213 net/pfe: not in enabled drivers build config 00:02:02.213 net/qede: not in enabled drivers build config 00:02:02.213 net/ring: not in enabled drivers build config 00:02:02.213 net/sfc: not in enabled drivers build config 00:02:02.213 net/softnic: not in enabled drivers build config 00:02:02.213 net/tap: not in enabled drivers build config 00:02:02.213 net/thunderx: not in enabled drivers build config 00:02:02.213 net/txgbe: not in enabled drivers build config 00:02:02.213 net/vdev_netvsc: not in enabled drivers build config 00:02:02.213 net/vhost: not in enabled drivers build config 00:02:02.213 net/virtio: not in enabled drivers build config 00:02:02.213 net/vmxnet3: not in enabled drivers build config 00:02:02.213 raw/*: missing internal dependency, "rawdev" 00:02:02.214 crypto/armv8: not in enabled drivers build config 00:02:02.214 crypto/bcmfs: not in enabled drivers build config 00:02:02.214 crypto/caam_jr: not in enabled drivers build config 00:02:02.214 crypto/ccp: not in enabled drivers build config 00:02:02.214 crypto/cnxk: not in enabled drivers build config 00:02:02.214 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.214 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.214 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.214 crypto/mlx5: not in enabled drivers build config 00:02:02.214 crypto/mvsam: not in enabled drivers build config 00:02:02.214 crypto/nitrox: not in enabled drivers build config 00:02:02.214 crypto/null: not in enabled drivers build config 00:02:02.214 crypto/octeontx: not in enabled drivers build config 00:02:02.214 crypto/openssl: not in enabled drivers build config 00:02:02.214 crypto/scheduler: not in enabled drivers build config 00:02:02.214 crypto/uadk: not in enabled drivers build config 00:02:02.214 crypto/virtio: not in enabled drivers build config 00:02:02.214 compress/isal: not in enabled drivers build config 00:02:02.214 compress/mlx5: not in enabled drivers build config 00:02:02.214 compress/nitrox: not in enabled drivers build config 00:02:02.214 compress/octeontx: not in enabled drivers build config 00:02:02.214 compress/zlib: not in enabled drivers build config 00:02:02.214 regex/*: missing internal dependency, "regexdev" 00:02:02.214 ml/*: missing internal dependency, "mldev" 00:02:02.214 vdpa/ifc: not in enabled drivers build config 00:02:02.214 vdpa/mlx5: not in enabled drivers build config 00:02:02.214 vdpa/nfp: not in enabled drivers build config 00:02:02.214 vdpa/sfc: not in enabled drivers build config 00:02:02.214 
event/*: missing internal dependency, "eventdev" 00:02:02.214 baseband/*: missing internal dependency, "bbdev" 00:02:02.214 gpu/*: missing internal dependency, "gpudev" 00:02:02.214 00:02:02.214 00:02:02.214 Build targets in project: 85 00:02:02.214 00:02:02.214 DPDK 24.03.0 00:02:02.214 00:02:02.214 User defined options 00:02:02.214 buildtype : debug 00:02:02.214 default_library : shared 00:02:02.214 libdir : lib 00:02:02.214 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:02.214 b_sanitize : address 00:02:02.214 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.214 c_link_args : 00:02:02.214 cpu_instruction_set: native 00:02:02.214 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:02.214 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:02.214 enable_docs : false 00:02:02.214 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.214 enable_kmods : false 00:02:02.214 max_lcores : 128 00:02:02.214 tests : false 00:02:02.214 00:02:02.214 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.214 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:02.482 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.482 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.482 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.482 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.482 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.482 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.482 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.482 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:02.482 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.482 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.482 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.482 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.482 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.482 [14/268] Linking static target lib/librte_kvargs.a 00:02:02.482 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.482 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.482 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:02.482 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.744 [19/268] Linking static target lib/librte_log.a 00:02:02.744 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:02.744 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:02.744 [22/268] Linking static target lib/librte_pci.a 00:02:02.744 [23/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:02.744 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.004 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.004 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.004 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.004 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.004 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.004 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.004 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.004 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.004 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.004 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.004 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.004 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.004 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.004 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.004 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.004 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.004 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.004 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.004 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.004 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.004 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.004 [46/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.004 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.004 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.004 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:03.004 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.004 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.004 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.004 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.004 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.004 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.004 [56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.004 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.004 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.004 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.004 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.004 [61/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:03.004 [62/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.004 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:03.004 [64/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.004 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.004 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.004 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.004 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.004 [69/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.004 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.004 [71/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.004 [72/268] Linking static target lib/librte_meter.a 00:02:03.004 [73/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.004 [74/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.004 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.004 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.004 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.004 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.004 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.004 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.004 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.004 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.004 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.004 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.004 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.004 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.004 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.004 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.004 [89/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.004 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.263 [91/268] Linking static target lib/librte_ring.a 00:02:03.263 [92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.263 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.263 [94/268] Linking static target lib/librte_telemetry.a 00:02:03.263 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.263 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.263 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.263 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.263 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.263 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.263 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.263 [102/268] Linking static target 
lib/net/libnet_crc_avx512_lib.a 00:02:03.263 [103/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.263 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.263 [105/268] Linking static target lib/librte_cmdline.a 00:02:03.263 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.263 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.263 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.263 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.263 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.263 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.263 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.263 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.263 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.263 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.263 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.263 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.263 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.264 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:03.264 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.264 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.264 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.264 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.264 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.264 [125/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.264 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.264 [127/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.264 [128/268] Linking static target lib/librte_net.a 00:02:03.264 [129/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.264 [130/268] Linking static target lib/librte_mempool.a 00:02:03.264 [131/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.264 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.264 [133/268] Linking target lib/librte_log.so.24.1 00:02:03.264 [134/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.522 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.522 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.522 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.522 [138/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.522 [139/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.522 [140/268] Linking static target lib/librte_rcu.a 00:02:03.522 [141/268] Linking static target lib/librte_eal.a 00:02:03.522 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.522 [143/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.523 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:03.523 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.523 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.523 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.523 [148/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.523 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.523 [150/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.523 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.523 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.523 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.523 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.523 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.523 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.523 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.523 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.523 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.523 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.523 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.523 [162/268] Linking static target lib/librte_dmadev.a 00:02:03.523 [163/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.523 [164/268] Linking static target lib/librte_timer.a 00:02:03.523 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.523 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.523 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.523 [168/268] Linking target lib/librte_kvargs.so.24.1 00:02:03.523 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.523 [170/268] Linking target lib/librte_telemetry.so.24.1 00:02:03.523 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.523 [172/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.523 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.523 [174/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:03.523 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:03.782 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.782 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.782 [178/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:03.782 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.782 [180/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.782 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.782 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:03.782 [183/268] Linking 
static target lib/librte_power.a 00:02:03.782 [184/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:03.782 [185/268] Linking static target lib/librte_compressdev.a 00:02:03.782 [186/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.782 [187/268] Linking static target drivers/librte_bus_vdev.a 00:02:03.782 [188/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:03.782 [189/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.782 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.782 [191/268] Linking static target lib/librte_reorder.a 00:02:03.782 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.782 [193/268] Linking static target lib/librte_mbuf.a 00:02:03.782 [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:03.782 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.782 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:03.782 [197/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.782 [198/268] Linking static target drivers/librte_bus_pci.a 00:02:03.782 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:03.782 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.041 [201/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.041 [202/268] Linking static target lib/librte_security.a 00:02:04.041 [203/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.041 [204/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.041 [205/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.041 [206/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.041 [207/268] Linking static target lib/librte_hash.a 00:02:04.041 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.041 [209/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.041 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.041 [211/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.041 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.041 [213/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.041 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:04.299 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.299 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.299 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.558 [218/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.558 [219/268] Linking static target lib/librte_cryptodev.a 00:02:04.558 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.558 [221/268] Generating 
lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.558 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.558 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.815 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.815 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.071 [226/268] Linking static target lib/librte_ethdev.a 00:02:06.007 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:06.266 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.798 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:08.798 [230/268] Linking static target lib/librte_vhost.a 00:02:10.704 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.610 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.869 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.128 [234/268] Linking target lib/librte_eal.so.24.1 00:02:13.128 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.128 [236/268] Linking target lib/librte_timer.so.24.1 00:02:13.128 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.128 [238/268] Linking target lib/librte_ring.so.24.1 00:02:13.128 [239/268] Linking target lib/librte_meter.so.24.1 00:02:13.128 [240/268] Linking target lib/librte_pci.so.24.1 00:02:13.128 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.387 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.387 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.387 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.387 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.387 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.387 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:13.387 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:13.387 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.387 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.387 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.646 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.646 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.646 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.646 [255/268] Linking target lib/librte_net.so.24.1 00:02:13.646 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:13.646 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:13.646 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.904 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:13.905 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:13.905 [261/268] Linking target lib/librte_hash.so.24.1 00:02:13.905 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:13.905 [263/268] 
Linking target lib/librte_security.so.24.1 00:02:13.905 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:13.905 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.164 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.164 [267/268] Linking target lib/librte_power.so.24.1 00:02:14.164 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:14.164 INFO: autodetecting backend as ninja 00:02:14.164 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:15.096 CC lib/ut_mock/mock.o 00:02:15.096 CC lib/ut/ut.o 00:02:15.096 CC lib/log/log.o 00:02:15.096 CC lib/log/log_flags.o 00:02:15.096 CC lib/log/log_deprecated.o 00:02:15.354 LIB libspdk_ut_mock.a 00:02:15.354 LIB libspdk_ut.a 00:02:15.354 LIB libspdk_log.a 00:02:15.354 SO libspdk_ut_mock.so.6.0 00:02:15.354 SO libspdk_log.so.7.0 00:02:15.354 SO libspdk_ut.so.2.0 00:02:15.354 SYMLINK libspdk_ut_mock.so 00:02:15.354 SYMLINK libspdk_ut.so 00:02:15.354 SYMLINK libspdk_log.so 00:02:15.611 CXX lib/trace_parser/trace.o 00:02:15.611 CC lib/ioat/ioat.o 00:02:15.611 CC lib/util/base64.o 00:02:15.611 CC lib/util/bit_array.o 00:02:15.611 CC lib/util/cpuset.o 00:02:15.611 CC lib/util/crc16.o 00:02:15.611 CC lib/util/crc32_ieee.o 00:02:15.611 CC lib/util/crc32.o 00:02:15.611 CC lib/util/crc32c.o 00:02:15.611 CC lib/util/crc64.o 00:02:15.611 CC lib/util/dif.o 00:02:15.611 CC lib/util/fd.o 00:02:15.611 CC lib/util/file.o 00:02:15.611 CC lib/util/hexlify.o 00:02:15.611 CC lib/util/iov.o 00:02:15.611 CC lib/util/pipe.o 00:02:15.611 CC lib/util/math.o 00:02:15.611 CC lib/util/strerror_tls.o 00:02:15.611 CC lib/dma/dma.o 00:02:15.611 CC lib/util/string.o 00:02:15.611 CC lib/util/uuid.o 00:02:15.611 CC lib/util/fd_group.o 00:02:15.611 CC lib/util/xor.o 00:02:15.611 CC lib/util/zipf.o 00:02:15.868 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.868 CC lib/vfio_user/host/vfio_user.o 00:02:15.868 LIB libspdk_dma.a 00:02:15.868 SO libspdk_dma.so.4.0 00:02:15.868 LIB libspdk_ioat.a 00:02:15.868 SYMLINK libspdk_dma.so 00:02:15.868 SO libspdk_ioat.so.7.0 00:02:16.127 SYMLINK libspdk_ioat.so 00:02:16.127 LIB libspdk_vfio_user.a 00:02:16.127 SO libspdk_vfio_user.so.5.0 00:02:16.127 SYMLINK libspdk_vfio_user.so 00:02:16.127 LIB libspdk_util.a 00:02:16.385 SO libspdk_util.so.9.1 00:02:16.385 SYMLINK libspdk_util.so 00:02:16.385 LIB libspdk_trace_parser.a 00:02:16.385 SO libspdk_trace_parser.so.5.0 00:02:16.644 SYMLINK libspdk_trace_parser.so 00:02:16.644 CC lib/rdma_utils/rdma_utils.o 00:02:16.644 CC lib/json/json_parse.o 00:02:16.644 CC lib/json/json_write.o 00:02:16.644 CC lib/json/json_util.o 00:02:16.644 CC lib/rdma_provider/common.o 00:02:16.644 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:16.644 CC lib/env_dpdk/memory.o 00:02:16.644 CC lib/vmd/vmd.o 00:02:16.644 CC lib/conf/conf.o 00:02:16.644 CC lib/env_dpdk/env.o 00:02:16.644 CC lib/vmd/led.o 00:02:16.644 CC lib/env_dpdk/init.o 00:02:16.644 CC lib/env_dpdk/pci.o 00:02:16.644 CC lib/env_dpdk/threads.o 00:02:16.644 CC lib/env_dpdk/pci_ioat.o 00:02:16.644 CC lib/idxd/idxd.o 00:02:16.644 CC lib/env_dpdk/pci_virtio.o 00:02:16.644 CC lib/env_dpdk/pci_vmd.o 00:02:16.644 CC lib/idxd/idxd_user.o 00:02:16.644 CC lib/env_dpdk/pci_idxd.o 00:02:16.644 CC lib/idxd/idxd_kernel.o 00:02:16.644 CC lib/env_dpdk/pci_event.o 00:02:16.644 CC lib/env_dpdk/sigbus_handler.o 00:02:16.644 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:16.644 CC 
lib/env_dpdk/pci_dpdk.o 00:02:16.644 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.902 LIB libspdk_rdma_provider.a 00:02:16.902 SO libspdk_rdma_provider.so.6.0 00:02:16.902 LIB libspdk_conf.a 00:02:16.902 LIB libspdk_rdma_utils.a 00:02:16.902 SO libspdk_rdma_utils.so.1.0 00:02:16.902 SO libspdk_conf.so.6.0 00:02:16.902 SYMLINK libspdk_rdma_provider.so 00:02:16.902 LIB libspdk_json.a 00:02:16.902 SO libspdk_json.so.6.0 00:02:16.902 SYMLINK libspdk_rdma_utils.so 00:02:16.902 SYMLINK libspdk_conf.so 00:02:17.161 SYMLINK libspdk_json.so 00:02:17.161 LIB libspdk_idxd.a 00:02:17.420 SO libspdk_idxd.so.12.0 00:02:17.420 LIB libspdk_vmd.a 00:02:17.420 CC lib/jsonrpc/jsonrpc_server.o 00:02:17.420 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:17.420 CC lib/jsonrpc/jsonrpc_client.o 00:02:17.420 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:17.420 SO libspdk_vmd.so.6.0 00:02:17.420 SYMLINK libspdk_idxd.so 00:02:17.420 SYMLINK libspdk_vmd.so 00:02:17.679 LIB libspdk_jsonrpc.a 00:02:17.679 SO libspdk_jsonrpc.so.6.0 00:02:17.679 SYMLINK libspdk_jsonrpc.so 00:02:17.938 CC lib/rpc/rpc.o 00:02:17.938 LIB libspdk_env_dpdk.a 00:02:17.938 SO libspdk_env_dpdk.so.14.1 00:02:18.195 LIB libspdk_rpc.a 00:02:18.195 SO libspdk_rpc.so.6.0 00:02:18.195 SYMLINK libspdk_env_dpdk.so 00:02:18.195 SYMLINK libspdk_rpc.so 00:02:18.454 CC lib/trace/trace.o 00:02:18.454 CC lib/trace/trace_flags.o 00:02:18.454 CC lib/trace/trace_rpc.o 00:02:18.454 CC lib/notify/notify.o 00:02:18.454 CC lib/notify/notify_rpc.o 00:02:18.454 CC lib/keyring/keyring.o 00:02:18.454 CC lib/keyring/keyring_rpc.o 00:02:18.712 LIB libspdk_notify.a 00:02:18.712 SO libspdk_notify.so.6.0 00:02:18.712 LIB libspdk_trace.a 00:02:18.712 LIB libspdk_keyring.a 00:02:18.712 SYMLINK libspdk_notify.so 00:02:18.712 SO libspdk_trace.so.10.0 00:02:18.712 SO libspdk_keyring.so.1.0 00:02:18.712 SYMLINK libspdk_trace.so 00:02:18.713 SYMLINK libspdk_keyring.so 00:02:19.279 CC lib/thread/thread.o 00:02:19.279 CC lib/thread/iobuf.o 00:02:19.279 CC lib/sock/sock.o 00:02:19.279 CC lib/sock/sock_rpc.o 00:02:19.538 LIB libspdk_sock.a 00:02:19.538 SO libspdk_sock.so.10.0 00:02:19.538 SYMLINK libspdk_sock.so 00:02:19.858 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:19.858 CC lib/nvme/nvme_ctrlr.o 00:02:19.858 CC lib/nvme/nvme_fabric.o 00:02:19.858 CC lib/nvme/nvme_ns_cmd.o 00:02:19.858 CC lib/nvme/nvme_ns.o 00:02:19.858 CC lib/nvme/nvme_qpair.o 00:02:19.858 CC lib/nvme/nvme_pcie_common.o 00:02:19.858 CC lib/nvme/nvme_pcie.o 00:02:19.858 CC lib/nvme/nvme.o 00:02:19.858 CC lib/nvme/nvme_quirks.o 00:02:19.858 CC lib/nvme/nvme_transport.o 00:02:19.858 CC lib/nvme/nvme_discovery.o 00:02:19.858 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.858 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.858 CC lib/nvme/nvme_opal.o 00:02:19.858 CC lib/nvme/nvme_tcp.o 00:02:19.858 CC lib/nvme/nvme_io_msg.o 00:02:19.858 CC lib/nvme/nvme_poll_group.o 00:02:19.858 CC lib/nvme/nvme_zns.o 00:02:19.858 CC lib/nvme/nvme_stubs.o 00:02:19.858 CC lib/nvme/nvme_auth.o 00:02:19.858 CC lib/nvme/nvme_cuse.o 00:02:19.858 CC lib/nvme/nvme_rdma.o 00:02:20.425 LIB libspdk_thread.a 00:02:20.425 SO libspdk_thread.so.10.1 00:02:20.683 SYMLINK libspdk_thread.so 00:02:20.941 CC lib/init/json_config.o 00:02:20.941 CC lib/init/subsystem.o 00:02:20.941 CC lib/init/subsystem_rpc.o 00:02:20.941 CC lib/init/rpc.o 00:02:20.941 CC lib/blob/blobstore.o 00:02:20.941 CC lib/blob/request.o 00:02:20.941 CC lib/blob/zeroes.o 00:02:20.941 CC lib/blob/blob_bs_dev.o 00:02:20.941 CC lib/accel/accel.o 00:02:20.941 CC lib/virtio/virtio.o 00:02:20.941 CC lib/accel/accel_sw.o 
00:02:20.941 CC lib/virtio/virtio_vhost_user.o 00:02:20.941 CC lib/virtio/virtio_vfio_user.o 00:02:20.941 CC lib/accel/accel_rpc.o 00:02:20.941 CC lib/virtio/virtio_pci.o 00:02:21.199 LIB libspdk_init.a 00:02:21.199 SO libspdk_init.so.5.0 00:02:21.199 SYMLINK libspdk_init.so 00:02:21.199 LIB libspdk_virtio.a 00:02:21.199 SO libspdk_virtio.so.7.0 00:02:21.456 SYMLINK libspdk_virtio.so 00:02:21.456 CC lib/event/app.o 00:02:21.456 CC lib/event/reactor.o 00:02:21.456 CC lib/event/log_rpc.o 00:02:21.456 CC lib/event/app_rpc.o 00:02:21.456 CC lib/event/scheduler_static.o 00:02:21.713 LIB libspdk_accel.a 00:02:21.713 LIB libspdk_nvme.a 00:02:21.971 SO libspdk_accel.so.15.1 00:02:21.971 SO libspdk_nvme.so.13.1 00:02:21.971 LIB libspdk_event.a 00:02:21.971 SYMLINK libspdk_accel.so 00:02:21.971 SO libspdk_event.so.14.0 00:02:21.971 SYMLINK libspdk_event.so 00:02:22.228 SYMLINK libspdk_nvme.so 00:02:22.228 CC lib/bdev/bdev.o 00:02:22.228 CC lib/bdev/bdev_rpc.o 00:02:22.228 CC lib/bdev/bdev_zone.o 00:02:22.228 CC lib/bdev/scsi_nvme.o 00:02:22.228 CC lib/bdev/part.o 00:02:24.132 LIB libspdk_blob.a 00:02:24.132 SO libspdk_blob.so.11.0 00:02:24.132 SYMLINK libspdk_blob.so 00:02:24.132 CC lib/lvol/lvol.o 00:02:24.132 CC lib/blobfs/blobfs.o 00:02:24.132 CC lib/blobfs/tree.o 00:02:24.698 LIB libspdk_bdev.a 00:02:24.698 SO libspdk_bdev.so.15.1 00:02:24.698 SYMLINK libspdk_bdev.so 00:02:24.956 LIB libspdk_blobfs.a 00:02:24.956 CC lib/scsi/dev.o 00:02:24.956 CC lib/scsi/lun.o 00:02:24.956 CC lib/scsi/port.o 00:02:24.956 CC lib/scsi/scsi.o 00:02:24.956 CC lib/scsi/scsi_bdev.o 00:02:24.956 CC lib/scsi/scsi_rpc.o 00:02:24.956 CC lib/scsi/task.o 00:02:24.956 CC lib/scsi/scsi_pr.o 00:02:24.956 SO libspdk_blobfs.so.10.0 00:02:24.956 CC lib/nvmf/ctrlr_discovery.o 00:02:24.956 CC lib/nvmf/ctrlr.o 00:02:24.956 CC lib/nvmf/ctrlr_bdev.o 00:02:24.956 CC lib/nvmf/nvmf.o 00:02:24.956 CC lib/nvmf/subsystem.o 00:02:24.956 CC lib/nvmf/nvmf_rpc.o 00:02:24.956 CC lib/nvmf/transport.o 00:02:24.956 CC lib/nvmf/mdns_server.o 00:02:24.956 CC lib/nvmf/tcp.o 00:02:24.956 CC lib/nvmf/stubs.o 00:02:24.956 CC lib/nvmf/rdma.o 00:02:24.956 CC lib/nvmf/auth.o 00:02:24.956 CC lib/nbd/nbd.o 00:02:24.956 CC lib/nbd/nbd_rpc.o 00:02:24.956 CC lib/ublk/ublk.o 00:02:24.956 CC lib/ublk/ublk_rpc.o 00:02:24.956 CC lib/ftl/ftl_core.o 00:02:24.956 LIB libspdk_lvol.a 00:02:24.956 CC lib/ftl/ftl_init.o 00:02:24.956 CC lib/ftl/ftl_layout.o 00:02:24.956 CC lib/ftl/ftl_debug.o 00:02:24.956 CC lib/ftl/ftl_io.o 00:02:24.956 CC lib/ftl/ftl_sb.o 00:02:24.956 CC lib/ftl/ftl_l2p.o 00:02:24.956 CC lib/ftl/ftl_l2p_flat.o 00:02:24.956 CC lib/ftl/ftl_nv_cache.o 00:02:24.956 CC lib/ftl/ftl_band.o 00:02:24.956 CC lib/ftl/ftl_band_ops.o 00:02:24.956 CC lib/ftl/ftl_writer.o 00:02:24.956 CC lib/ftl/ftl_reloc.o 00:02:24.956 CC lib/ftl/ftl_rq.o 00:02:24.956 CC lib/ftl/ftl_l2p_cache.o 00:02:24.956 CC lib/ftl/ftl_p2l.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.956 CC lib/ftl/utils/ftl_conf.o 00:02:24.956 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.956 CC 
lib/ftl/utils/ftl_mempool.o 00:02:24.956 CC lib/ftl/utils/ftl_md.o 00:02:24.956 CC lib/ftl/utils/ftl_bitmap.o 00:02:24.956 CC lib/ftl/utils/ftl_property.o 00:02:24.956 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:24.956 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:24.956 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:24.956 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:24.956 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:24.956 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:24.956 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:24.956 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:24.956 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:24.956 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:24.956 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:24.956 CC lib/ftl/base/ftl_base_dev.o 00:02:24.956 CC lib/ftl/base/ftl_base_bdev.o 00:02:24.956 CC lib/ftl/ftl_trace.o 00:02:24.956 SO libspdk_lvol.so.10.0 00:02:25.215 SYMLINK libspdk_blobfs.so 00:02:25.215 SYMLINK libspdk_lvol.so 00:02:25.782 LIB libspdk_nbd.a 00:02:25.782 SO libspdk_nbd.so.7.0 00:02:25.782 LIB libspdk_scsi.a 00:02:25.782 LIB libspdk_ublk.a 00:02:25.782 SO libspdk_scsi.so.9.0 00:02:25.782 SO libspdk_ublk.so.3.0 00:02:25.782 SYMLINK libspdk_nbd.so 00:02:25.782 SYMLINK libspdk_ublk.so 00:02:25.782 SYMLINK libspdk_scsi.so 00:02:26.040 CC lib/vhost/vhost_rpc.o 00:02:26.040 CC lib/vhost/vhost.o 00:02:26.041 CC lib/vhost/vhost_blk.o 00:02:26.041 CC lib/vhost/vhost_scsi.o 00:02:26.041 LIB libspdk_ftl.a 00:02:26.041 CC lib/vhost/rte_vhost_user.o 00:02:26.041 CC lib/iscsi/conn.o 00:02:26.041 CC lib/iscsi/init_grp.o 00:02:26.041 CC lib/iscsi/md5.o 00:02:26.041 CC lib/iscsi/iscsi.o 00:02:26.041 CC lib/iscsi/param.o 00:02:26.041 CC lib/iscsi/portal_grp.o 00:02:26.041 CC lib/iscsi/iscsi_rpc.o 00:02:26.041 CC lib/iscsi/tgt_node.o 00:02:26.041 CC lib/iscsi/iscsi_subsystem.o 00:02:26.041 CC lib/iscsi/task.o 00:02:26.299 SO libspdk_ftl.so.9.0 00:02:26.559 SYMLINK libspdk_ftl.so 00:02:27.126 LIB libspdk_vhost.a 00:02:27.126 SO libspdk_vhost.so.8.0 00:02:27.126 SYMLINK libspdk_vhost.so 00:02:27.385 LIB libspdk_nvmf.a 00:02:27.385 SO libspdk_nvmf.so.18.1 00:02:27.385 LIB libspdk_iscsi.a 00:02:27.385 SO libspdk_iscsi.so.8.0 00:02:27.644 SYMLINK libspdk_nvmf.so 00:02:27.644 SYMLINK libspdk_iscsi.so 00:02:28.211 CC module/env_dpdk/env_dpdk_rpc.o 00:02:28.211 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:28.211 CC module/scheduler/gscheduler/gscheduler.o 00:02:28.211 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:28.211 CC module/blob/bdev/blob_bdev.o 00:02:28.211 CC module/sock/posix/posix.o 00:02:28.211 LIB libspdk_env_dpdk_rpc.a 00:02:28.211 CC module/accel/iaa/accel_iaa_rpc.o 00:02:28.211 CC module/accel/iaa/accel_iaa.o 00:02:28.211 CC module/keyring/linux/keyring.o 00:02:28.211 CC module/keyring/linux/keyring_rpc.o 00:02:28.211 CC module/accel/dsa/accel_dsa.o 00:02:28.211 CC module/accel/ioat/accel_ioat.o 00:02:28.211 CC module/accel/error/accel_error.o 00:02:28.211 CC module/accel/dsa/accel_dsa_rpc.o 00:02:28.211 CC module/accel/ioat/accel_ioat_rpc.o 00:02:28.211 CC module/accel/error/accel_error_rpc.o 00:02:28.211 CC module/keyring/file/keyring.o 00:02:28.211 CC module/keyring/file/keyring_rpc.o 00:02:28.211 SO libspdk_env_dpdk_rpc.so.6.0 00:02:28.211 SYMLINK libspdk_env_dpdk_rpc.so 00:02:28.470 LIB libspdk_scheduler_gscheduler.a 00:02:28.470 LIB libspdk_scheduler_dpdk_governor.a 00:02:28.470 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:28.470 LIB libspdk_keyring_linux.a 00:02:28.470 SO libspdk_scheduler_gscheduler.so.4.0 00:02:28.470 LIB libspdk_keyring_file.a 00:02:28.470 LIB 
libspdk_scheduler_dynamic.a 00:02:28.470 SO libspdk_keyring_linux.so.1.0 00:02:28.470 LIB libspdk_accel_error.a 00:02:28.470 LIB libspdk_accel_iaa.a 00:02:28.470 SO libspdk_keyring_file.so.1.0 00:02:28.470 SYMLINK libspdk_scheduler_gscheduler.so 00:02:28.470 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:28.470 SO libspdk_accel_error.so.2.0 00:02:28.470 LIB libspdk_accel_ioat.a 00:02:28.470 LIB libspdk_blob_bdev.a 00:02:28.470 SO libspdk_accel_iaa.so.3.0 00:02:28.470 SO libspdk_scheduler_dynamic.so.4.0 00:02:28.470 SO libspdk_accel_ioat.so.6.0 00:02:28.470 SYMLINK libspdk_keyring_linux.so 00:02:28.470 SO libspdk_blob_bdev.so.11.0 00:02:28.470 LIB libspdk_accel_dsa.a 00:02:28.470 SYMLINK libspdk_keyring_file.so 00:02:28.470 SYMLINK libspdk_accel_error.so 00:02:28.470 SYMLINK libspdk_scheduler_dynamic.so 00:02:28.470 SYMLINK libspdk_accel_iaa.so 00:02:28.470 SO libspdk_accel_dsa.so.5.0 00:02:28.470 SYMLINK libspdk_blob_bdev.so 00:02:28.470 SYMLINK libspdk_accel_ioat.so 00:02:28.728 SYMLINK libspdk_accel_dsa.so 00:02:29.005 LIB libspdk_sock_posix.a 00:02:29.005 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.005 CC module/bdev/error/vbdev_error.o 00:02:29.005 CC module/bdev/gpt/gpt.o 00:02:29.005 CC module/bdev/null/bdev_null.o 00:02:29.005 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.005 CC module/bdev/null/bdev_null_rpc.o 00:02:29.005 CC module/bdev/delay/vbdev_delay.o 00:02:29.005 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:29.005 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:29.005 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:29.005 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:29.005 SO libspdk_sock_posix.so.6.0 00:02:29.005 CC module/bdev/nvme/bdev_nvme.o 00:02:29.005 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.005 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:29.005 CC module/bdev/nvme/nvme_rpc.o 00:02:29.005 CC module/bdev/raid/bdev_raid_rpc.o 00:02:29.005 CC module/bdev/nvme/bdev_mdns_client.o 00:02:29.005 CC module/bdev/raid/bdev_raid.o 00:02:29.005 CC module/bdev/nvme/vbdev_opal.o 00:02:29.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.005 CC module/bdev/raid/raid0.o 00:02:29.005 CC module/bdev/malloc/bdev_malloc.o 00:02:29.005 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.005 CC module/bdev/raid/bdev_raid_sb.o 00:02:29.005 CC module/bdev/raid/raid1.o 00:02:29.005 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:29.005 CC module/bdev/aio/bdev_aio.o 00:02:29.005 CC module/bdev/raid/concat.o 00:02:29.005 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:29.005 CC module/bdev/aio/bdev_aio_rpc.o 00:02:29.005 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:29.005 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.005 CC module/bdev/iscsi/bdev_iscsi.o 00:02:29.005 CC module/bdev/ftl/bdev_ftl.o 00:02:29.005 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.005 CC module/bdev/split/vbdev_split_rpc.o 00:02:29.005 CC module/bdev/split/vbdev_split.o 00:02:29.005 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:29.005 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.005 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.006 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.006 SYMLINK libspdk_sock_posix.so 00:02:29.268 LIB libspdk_blobfs_bdev.a 00:02:29.268 SO libspdk_blobfs_bdev.so.6.0 00:02:29.268 LIB libspdk_bdev_null.a 00:02:29.268 LIB libspdk_bdev_split.a 00:02:29.268 LIB libspdk_bdev_error.a 00:02:29.268 SO libspdk_bdev_null.so.6.0 00:02:29.268 LIB libspdk_bdev_gpt.a 00:02:29.268 SO libspdk_bdev_split.so.6.0 00:02:29.268 SYMLINK 
libspdk_blobfs_bdev.so 00:02:29.268 SO libspdk_bdev_error.so.6.0 00:02:29.268 LIB libspdk_bdev_ftl.a 00:02:29.268 SO libspdk_bdev_gpt.so.6.0 00:02:29.268 SO libspdk_bdev_ftl.so.6.0 00:02:29.268 LIB libspdk_bdev_aio.a 00:02:29.268 SYMLINK libspdk_bdev_null.so 00:02:29.268 LIB libspdk_bdev_passthru.a 00:02:29.268 SYMLINK libspdk_bdev_split.so 00:02:29.268 SO libspdk_bdev_aio.so.6.0 00:02:29.526 LIB libspdk_bdev_zone_block.a 00:02:29.526 SO libspdk_bdev_passthru.so.6.0 00:02:29.526 SYMLINK libspdk_bdev_error.so 00:02:29.526 LIB libspdk_bdev_malloc.a 00:02:29.526 SYMLINK libspdk_bdev_gpt.so 00:02:29.526 LIB libspdk_bdev_delay.a 00:02:29.526 SYMLINK libspdk_bdev_ftl.so 00:02:29.526 SO libspdk_bdev_delay.so.6.0 00:02:29.526 LIB libspdk_bdev_iscsi.a 00:02:29.526 SO libspdk_bdev_zone_block.so.6.0 00:02:29.526 SO libspdk_bdev_malloc.so.6.0 00:02:29.526 SYMLINK libspdk_bdev_aio.so 00:02:29.526 SYMLINK libspdk_bdev_passthru.so 00:02:29.526 SO libspdk_bdev_iscsi.so.6.0 00:02:29.526 SYMLINK libspdk_bdev_delay.so 00:02:29.526 SYMLINK libspdk_bdev_zone_block.so 00:02:29.526 SYMLINK libspdk_bdev_malloc.so 00:02:29.526 LIB libspdk_bdev_virtio.a 00:02:29.526 SYMLINK libspdk_bdev_iscsi.so 00:02:29.526 LIB libspdk_bdev_lvol.a 00:02:29.526 SO libspdk_bdev_virtio.so.6.0 00:02:29.526 SO libspdk_bdev_lvol.so.6.0 00:02:29.526 SYMLINK libspdk_bdev_virtio.so 00:02:29.784 SYMLINK libspdk_bdev_lvol.so 00:02:30.042 LIB libspdk_bdev_raid.a 00:02:30.042 SO libspdk_bdev_raid.so.6.0 00:02:30.042 SYMLINK libspdk_bdev_raid.so 00:02:30.976 LIB libspdk_bdev_nvme.a 00:02:30.976 SO libspdk_bdev_nvme.so.7.0 00:02:31.236 SYMLINK libspdk_bdev_nvme.so 00:02:31.804 CC module/event/subsystems/iobuf/iobuf.o 00:02:31.804 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:31.804 CC module/event/subsystems/sock/sock.o 00:02:31.804 CC module/event/subsystems/scheduler/scheduler.o 00:02:31.804 CC module/event/subsystems/vmd/vmd.o 00:02:31.804 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:31.804 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:31.804 CC module/event/subsystems/keyring/keyring.o 00:02:31.804 LIB libspdk_event_scheduler.a 00:02:31.804 LIB libspdk_event_sock.a 00:02:31.804 LIB libspdk_event_vhost_blk.a 00:02:31.804 LIB libspdk_event_iobuf.a 00:02:32.063 LIB libspdk_event_vmd.a 00:02:32.063 SO libspdk_event_scheduler.so.4.0 00:02:32.063 SO libspdk_event_sock.so.5.0 00:02:32.063 LIB libspdk_event_keyring.a 00:02:32.063 SO libspdk_event_vhost_blk.so.3.0 00:02:32.063 SO libspdk_event_iobuf.so.3.0 00:02:32.063 SO libspdk_event_vmd.so.6.0 00:02:32.063 SO libspdk_event_keyring.so.1.0 00:02:32.063 SYMLINK libspdk_event_sock.so 00:02:32.063 SYMLINK libspdk_event_scheduler.so 00:02:32.063 SYMLINK libspdk_event_vhost_blk.so 00:02:32.063 SYMLINK libspdk_event_iobuf.so 00:02:32.063 SYMLINK libspdk_event_keyring.so 00:02:32.063 SYMLINK libspdk_event_vmd.so 00:02:32.323 CC module/event/subsystems/accel/accel.o 00:02:32.582 LIB libspdk_event_accel.a 00:02:32.582 SO libspdk_event_accel.so.6.0 00:02:32.582 SYMLINK libspdk_event_accel.so 00:02:32.841 CC module/event/subsystems/bdev/bdev.o 00:02:33.100 LIB libspdk_event_bdev.a 00:02:33.100 SO libspdk_event_bdev.so.6.0 00:02:33.100 SYMLINK libspdk_event_bdev.so 00:02:33.359 CC module/event/subsystems/scsi/scsi.o 00:02:33.359 CC module/event/subsystems/nbd/nbd.o 00:02:33.359 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:33.359 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:33.359 CC module/event/subsystems/ublk/ublk.o 00:02:33.359 LIB libspdk_event_scsi.a 00:02:33.617 SO 
libspdk_event_scsi.so.6.0 00:02:33.617 LIB libspdk_event_nbd.a 00:02:33.617 LIB libspdk_event_ublk.a 00:02:33.617 SO libspdk_event_nbd.so.6.0 00:02:33.617 SYMLINK libspdk_event_scsi.so 00:02:33.617 SO libspdk_event_ublk.so.3.0 00:02:33.617 LIB libspdk_event_nvmf.a 00:02:33.617 SYMLINK libspdk_event_nbd.so 00:02:33.617 SO libspdk_event_nvmf.so.6.0 00:02:33.617 SYMLINK libspdk_event_ublk.so 00:02:33.617 SYMLINK libspdk_event_nvmf.so 00:02:33.876 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:33.876 CC module/event/subsystems/iscsi/iscsi.o 00:02:33.876 LIB libspdk_event_vhost_scsi.a 00:02:33.876 LIB libspdk_event_iscsi.a 00:02:33.876 SO libspdk_event_vhost_scsi.so.3.0 00:02:34.134 SO libspdk_event_iscsi.so.6.0 00:02:34.134 SYMLINK libspdk_event_vhost_scsi.so 00:02:34.134 SYMLINK libspdk_event_iscsi.so 00:02:34.134 SO libspdk.so.6.0 00:02:34.134 SYMLINK libspdk.so 00:02:34.714 TEST_HEADER include/spdk/accel_module.h 00:02:34.715 CXX app/trace/trace.o 00:02:34.715 TEST_HEADER include/spdk/assert.h 00:02:34.715 TEST_HEADER include/spdk/accel.h 00:02:34.715 TEST_HEADER include/spdk/bdev.h 00:02:34.715 TEST_HEADER include/spdk/barrier.h 00:02:34.715 TEST_HEADER include/spdk/base64.h 00:02:34.715 TEST_HEADER include/spdk/bdev_module.h 00:02:34.715 TEST_HEADER include/spdk/bdev_zone.h 00:02:34.715 TEST_HEADER include/spdk/bit_pool.h 00:02:34.715 TEST_HEADER include/spdk/bit_array.h 00:02:34.715 TEST_HEADER include/spdk/blob_bdev.h 00:02:34.715 CC test/rpc_client/rpc_client_test.o 00:02:34.715 CC app/trace_record/trace_record.o 00:02:34.715 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:34.715 TEST_HEADER include/spdk/blobfs.h 00:02:34.715 TEST_HEADER include/spdk/config.h 00:02:34.715 TEST_HEADER include/spdk/blob.h 00:02:34.715 TEST_HEADER include/spdk/conf.h 00:02:34.715 TEST_HEADER include/spdk/cpuset.h 00:02:34.715 TEST_HEADER include/spdk/crc64.h 00:02:34.715 TEST_HEADER include/spdk/crc16.h 00:02:34.715 TEST_HEADER include/spdk/crc32.h 00:02:34.715 CC app/spdk_lspci/spdk_lspci.o 00:02:34.715 TEST_HEADER include/spdk/endian.h 00:02:34.715 TEST_HEADER include/spdk/dma.h 00:02:34.715 TEST_HEADER include/spdk/env_dpdk.h 00:02:34.715 TEST_HEADER include/spdk/dif.h 00:02:34.715 TEST_HEADER include/spdk/env.h 00:02:34.715 TEST_HEADER include/spdk/event.h 00:02:34.715 TEST_HEADER include/spdk/fd_group.h 00:02:34.715 CC app/spdk_nvme_perf/perf.o 00:02:34.715 TEST_HEADER include/spdk/file.h 00:02:34.715 TEST_HEADER include/spdk/fd.h 00:02:34.715 TEST_HEADER include/spdk/ftl.h 00:02:34.715 TEST_HEADER include/spdk/hexlify.h 00:02:34.715 TEST_HEADER include/spdk/histogram_data.h 00:02:34.715 CC app/spdk_top/spdk_top.o 00:02:34.715 TEST_HEADER include/spdk/gpt_spec.h 00:02:34.715 TEST_HEADER include/spdk/idxd.h 00:02:34.715 TEST_HEADER include/spdk/idxd_spec.h 00:02:34.715 TEST_HEADER include/spdk/init.h 00:02:34.715 CC app/spdk_nvme_identify/identify.o 00:02:34.715 TEST_HEADER include/spdk/ioat_spec.h 00:02:34.715 TEST_HEADER include/spdk/iscsi_spec.h 00:02:34.715 TEST_HEADER include/spdk/ioat.h 00:02:34.715 TEST_HEADER include/spdk/json.h 00:02:34.715 TEST_HEADER include/spdk/jsonrpc.h 00:02:34.715 TEST_HEADER include/spdk/keyring_module.h 00:02:34.715 TEST_HEADER include/spdk/keyring.h 00:02:34.715 TEST_HEADER include/spdk/likely.h 00:02:34.715 TEST_HEADER include/spdk/log.h 00:02:34.715 TEST_HEADER include/spdk/lvol.h 00:02:34.715 TEST_HEADER include/spdk/memory.h 00:02:34.715 TEST_HEADER include/spdk/mmio.h 00:02:34.715 TEST_HEADER include/spdk/nbd.h 00:02:34.715 CC 
app/spdk_nvme_discover/discovery_aer.o 00:02:34.715 TEST_HEADER include/spdk/notify.h 00:02:34.715 TEST_HEADER include/spdk/nvme.h 00:02:34.715 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:34.715 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:34.715 TEST_HEADER include/spdk/nvme_spec.h 00:02:34.715 TEST_HEADER include/spdk/nvme_intel.h 00:02:34.715 TEST_HEADER include/spdk/nvme_zns.h 00:02:34.715 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:34.715 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:34.715 TEST_HEADER include/spdk/nvmf.h 00:02:34.715 TEST_HEADER include/spdk/nvmf_spec.h 00:02:34.715 TEST_HEADER include/spdk/nvmf_transport.h 00:02:34.715 TEST_HEADER include/spdk/opal.h 00:02:34.715 TEST_HEADER include/spdk/opal_spec.h 00:02:34.715 TEST_HEADER include/spdk/pci_ids.h 00:02:34.715 TEST_HEADER include/spdk/queue.h 00:02:34.715 TEST_HEADER include/spdk/pipe.h 00:02:34.715 TEST_HEADER include/spdk/reduce.h 00:02:34.715 TEST_HEADER include/spdk/scheduler.h 00:02:34.715 TEST_HEADER include/spdk/scsi.h 00:02:34.715 TEST_HEADER include/spdk/rpc.h 00:02:34.715 TEST_HEADER include/spdk/scsi_spec.h 00:02:34.715 TEST_HEADER include/spdk/sock.h 00:02:34.715 TEST_HEADER include/spdk/stdinc.h 00:02:34.715 TEST_HEADER include/spdk/string.h 00:02:34.715 TEST_HEADER include/spdk/thread.h 00:02:34.715 TEST_HEADER include/spdk/trace.h 00:02:34.715 TEST_HEADER include/spdk/tree.h 00:02:34.715 TEST_HEADER include/spdk/trace_parser.h 00:02:34.715 TEST_HEADER include/spdk/ublk.h 00:02:34.715 TEST_HEADER include/spdk/util.h 00:02:34.715 TEST_HEADER include/spdk/uuid.h 00:02:34.715 CC app/nvmf_tgt/nvmf_main.o 00:02:34.715 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:34.715 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:34.715 TEST_HEADER include/spdk/vhost.h 00:02:34.715 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:34.715 TEST_HEADER include/spdk/version.h 00:02:34.715 TEST_HEADER include/spdk/vmd.h 00:02:34.715 TEST_HEADER include/spdk/zipf.h 00:02:34.715 CXX test/cpp_headers/accel.o 00:02:34.715 TEST_HEADER include/spdk/xor.h 00:02:34.715 CXX test/cpp_headers/accel_module.o 00:02:34.715 CXX test/cpp_headers/assert.o 00:02:34.715 CXX test/cpp_headers/barrier.o 00:02:34.715 CXX test/cpp_headers/base64.o 00:02:34.715 CXX test/cpp_headers/bdev.o 00:02:34.715 CXX test/cpp_headers/bdev_module.o 00:02:34.715 CXX test/cpp_headers/bit_array.o 00:02:34.715 CXX test/cpp_headers/bit_pool.o 00:02:34.715 CXX test/cpp_headers/blob_bdev.o 00:02:34.715 CXX test/cpp_headers/bdev_zone.o 00:02:34.715 CC app/spdk_dd/spdk_dd.o 00:02:34.715 CXX test/cpp_headers/blobfs_bdev.o 00:02:34.715 CXX test/cpp_headers/blob.o 00:02:34.715 CXX test/cpp_headers/blobfs.o 00:02:34.715 CXX test/cpp_headers/config.o 00:02:34.715 CXX test/cpp_headers/conf.o 00:02:34.715 CXX test/cpp_headers/cpuset.o 00:02:34.715 CXX test/cpp_headers/crc16.o 00:02:34.715 CXX test/cpp_headers/crc32.o 00:02:34.715 CXX test/cpp_headers/crc64.o 00:02:34.715 CXX test/cpp_headers/dma.o 00:02:34.715 CXX test/cpp_headers/dif.o 00:02:34.715 CXX test/cpp_headers/env_dpdk.o 00:02:34.715 CXX test/cpp_headers/env.o 00:02:34.715 CXX test/cpp_headers/event.o 00:02:34.715 CXX test/cpp_headers/fd_group.o 00:02:34.715 CXX test/cpp_headers/fd.o 00:02:34.715 CXX test/cpp_headers/endian.o 00:02:34.715 CXX test/cpp_headers/ftl.o 00:02:34.715 CC app/iscsi_tgt/iscsi_tgt.o 00:02:34.715 CXX test/cpp_headers/file.o 00:02:34.715 CXX test/cpp_headers/gpt_spec.o 00:02:34.715 CXX test/cpp_headers/histogram_data.o 00:02:34.715 CXX test/cpp_headers/hexlify.o 00:02:34.715 CC 
app/spdk_tgt/spdk_tgt.o 00:02:34.715 CXX test/cpp_headers/idxd.o 00:02:34.715 CXX test/cpp_headers/ioat.o 00:02:34.715 CXX test/cpp_headers/idxd_spec.o 00:02:34.715 CXX test/cpp_headers/ioat_spec.o 00:02:34.715 CXX test/cpp_headers/init.o 00:02:34.715 CXX test/cpp_headers/json.o 00:02:34.715 CXX test/cpp_headers/iscsi_spec.o 00:02:34.715 CXX test/cpp_headers/jsonrpc.o 00:02:34.715 CXX test/cpp_headers/keyring.o 00:02:34.715 CXX test/cpp_headers/keyring_module.o 00:02:34.715 CXX test/cpp_headers/likely.o 00:02:34.715 CXX test/cpp_headers/log.o 00:02:34.715 CXX test/cpp_headers/lvol.o 00:02:34.715 CXX test/cpp_headers/memory.o 00:02:34.715 CXX test/cpp_headers/mmio.o 00:02:34.715 CXX test/cpp_headers/nbd.o 00:02:34.715 CXX test/cpp_headers/notify.o 00:02:34.715 CXX test/cpp_headers/nvme.o 00:02:34.715 CXX test/cpp_headers/nvme_ocssd.o 00:02:34.715 CXX test/cpp_headers/nvme_intel.o 00:02:34.715 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:34.715 CXX test/cpp_headers/nvme_zns.o 00:02:34.715 CXX test/cpp_headers/nvme_spec.o 00:02:34.715 CXX test/cpp_headers/nvmf_cmd.o 00:02:34.715 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:34.715 CXX test/cpp_headers/nvmf.o 00:02:34.715 CXX test/cpp_headers/nvmf_spec.o 00:02:34.715 CXX test/cpp_headers/nvmf_transport.o 00:02:34.715 CXX test/cpp_headers/opal.o 00:02:34.715 CXX test/cpp_headers/opal_spec.o 00:02:34.715 CXX test/cpp_headers/pci_ids.o 00:02:34.715 CXX test/cpp_headers/pipe.o 00:02:34.715 CXX test/cpp_headers/queue.o 00:02:34.715 CXX test/cpp_headers/reduce.o 00:02:34.715 CC test/app/jsoncat/jsoncat.o 00:02:34.715 CC test/app/stub/stub.o 00:02:34.715 CXX test/cpp_headers/rpc.o 00:02:34.715 CC test/app/histogram_perf/histogram_perf.o 00:02:34.715 CC test/env/vtophys/vtophys.o 00:02:34.715 CC test/env/pci/pci_ut.o 00:02:34.715 CC examples/util/zipf/zipf.o 00:02:34.715 CC test/env/memory/memory_ut.o 00:02:34.715 CC examples/ioat/verify/verify.o 00:02:34.715 CC test/thread/poller_perf/poller_perf.o 00:02:34.715 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:34.715 CC examples/ioat/perf/perf.o 00:02:34.715 CC test/app/bdev_svc/bdev_svc.o 00:02:34.715 CXX test/cpp_headers/scheduler.o 00:02:34.715 CC test/dma/test_dma/test_dma.o 00:02:34.715 CC app/fio/nvme/fio_plugin.o 00:02:34.990 CC app/fio/bdev/fio_plugin.o 00:02:34.990 LINK spdk_lspci 00:02:34.990 LINK rpc_client_test 00:02:35.249 LINK jsoncat 00:02:35.249 CC test/env/mem_callbacks/mem_callbacks.o 00:02:35.249 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:35.249 LINK spdk_trace_record 00:02:35.249 LINK iscsi_tgt 00:02:35.249 CXX test/cpp_headers/scsi.o 00:02:35.249 CXX test/cpp_headers/scsi_spec.o 00:02:35.249 CXX test/cpp_headers/sock.o 00:02:35.249 LINK zipf 00:02:35.249 LINK nvmf_tgt 00:02:35.249 CXX test/cpp_headers/stdinc.o 00:02:35.249 CXX test/cpp_headers/string.o 00:02:35.249 CXX test/cpp_headers/thread.o 00:02:35.249 LINK stub 00:02:35.249 LINK spdk_nvme_discover 00:02:35.249 CXX test/cpp_headers/trace.o 00:02:35.250 CXX test/cpp_headers/trace_parser.o 00:02:35.250 CXX test/cpp_headers/tree.o 00:02:35.250 CXX test/cpp_headers/ublk.o 00:02:35.250 CXX test/cpp_headers/util.o 00:02:35.250 CXX test/cpp_headers/uuid.o 00:02:35.250 LINK interrupt_tgt 00:02:35.250 CXX test/cpp_headers/version.o 00:02:35.250 CXX test/cpp_headers/vfio_user_pci.o 00:02:35.250 CXX test/cpp_headers/vfio_user_spec.o 00:02:35.250 CXX test/cpp_headers/vhost.o 00:02:35.250 CXX test/cpp_headers/vmd.o 00:02:35.250 LINK bdev_svc 00:02:35.250 CXX test/cpp_headers/xor.o 00:02:35.250 CXX test/cpp_headers/zipf.o 
00:02:35.250 LINK histogram_perf 00:02:35.250 LINK vtophys 00:02:35.250 LINK poller_perf 00:02:35.250 LINK spdk_tgt 00:02:35.250 LINK ioat_perf 00:02:35.250 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:35.250 LINK env_dpdk_post_init 00:02:35.508 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:35.508 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:35.508 LINK verify 00:02:35.508 LINK spdk_dd 00:02:35.508 LINK spdk_trace 00:02:35.508 LINK pci_ut 00:02:35.766 LINK test_dma 00:02:35.766 CC examples/idxd/perf/perf.o 00:02:35.766 LINK nvme_fuzz 00:02:35.766 CC examples/sock/hello_world/hello_sock.o 00:02:35.766 CC examples/vmd/lsvmd/lsvmd.o 00:02:35.766 LINK spdk_bdev 00:02:35.766 CC examples/vmd/led/led.o 00:02:35.766 CC examples/thread/thread/thread_ex.o 00:02:35.766 CC test/event/reactor_perf/reactor_perf.o 00:02:35.766 CC test/event/reactor/reactor.o 00:02:35.766 CC test/event/event_perf/event_perf.o 00:02:35.767 CC test/event/app_repeat/app_repeat.o 00:02:35.767 LINK spdk_nvme 00:02:35.767 LINK vhost_fuzz 00:02:35.767 CC test/event/scheduler/scheduler.o 00:02:36.026 CC app/vhost/vhost.o 00:02:36.026 LINK mem_callbacks 00:02:36.026 LINK spdk_nvme_identify 00:02:36.026 LINK lsvmd 00:02:36.026 LINK reactor_perf 00:02:36.026 LINK event_perf 00:02:36.026 LINK led 00:02:36.026 LINK reactor 00:02:36.026 LINK spdk_nvme_perf 00:02:36.026 LINK app_repeat 00:02:36.026 LINK hello_sock 00:02:36.026 LINK spdk_top 00:02:36.026 LINK thread 00:02:36.026 CC test/nvme/aer/aer.o 00:02:36.026 CC test/nvme/reset/reset.o 00:02:36.026 CC test/nvme/boot_partition/boot_partition.o 00:02:36.026 CC test/nvme/sgl/sgl.o 00:02:36.026 CC test/nvme/overhead/overhead.o 00:02:36.026 CC test/nvme/simple_copy/simple_copy.o 00:02:36.026 CC test/nvme/e2edp/nvme_dp.o 00:02:36.026 CC test/nvme/fused_ordering/fused_ordering.o 00:02:36.026 CC test/nvme/err_injection/err_injection.o 00:02:36.026 CC test/nvme/startup/startup.o 00:02:36.026 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:36.026 CC test/nvme/connect_stress/connect_stress.o 00:02:36.026 CC test/nvme/cuse/cuse.o 00:02:36.026 LINK idxd_perf 00:02:36.026 CC test/nvme/reserve/reserve.o 00:02:36.026 LINK scheduler 00:02:36.026 CC test/nvme/compliance/nvme_compliance.o 00:02:36.026 CC test/nvme/fdp/fdp.o 00:02:36.026 CC test/blobfs/mkfs/mkfs.o 00:02:36.026 LINK vhost 00:02:36.026 CC test/accel/dif/dif.o 00:02:36.285 CC test/lvol/esnap/esnap.o 00:02:36.285 LINK boot_partition 00:02:36.285 LINK memory_ut 00:02:36.285 LINK startup 00:02:36.285 LINK connect_stress 00:02:36.285 LINK doorbell_aers 00:02:36.285 LINK err_injection 00:02:36.285 LINK fused_ordering 00:02:36.285 LINK reserve 00:02:36.285 LINK simple_copy 00:02:36.285 LINK mkfs 00:02:36.285 LINK sgl 00:02:36.285 LINK reset 00:02:36.285 LINK overhead 00:02:36.285 LINK nvme_dp 00:02:36.285 LINK aer 00:02:36.543 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:36.543 CC examples/nvme/arbitration/arbitration.o 00:02:36.543 LINK fdp 00:02:36.543 CC examples/nvme/hotplug/hotplug.o 00:02:36.543 CC examples/nvme/reconnect/reconnect.o 00:02:36.543 CC examples/nvme/hello_world/hello_world.o 00:02:36.543 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:36.543 CC examples/nvme/abort/abort.o 00:02:36.543 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:36.543 LINK nvme_compliance 00:02:36.543 CC examples/accel/perf/accel_perf.o 00:02:36.543 CC examples/blob/hello_world/hello_blob.o 00:02:36.543 CC examples/blob/cli/blobcli.o 00:02:36.543 LINK dif 00:02:36.543 LINK pmr_persistence 00:02:36.543 LINK cmb_copy 00:02:36.802 LINK 
hotplug 00:02:36.802 LINK hello_world 00:02:36.802 LINK arbitration 00:02:36.802 LINK hello_blob 00:02:36.802 LINK reconnect 00:02:36.802 LINK abort 00:02:36.802 LINK nvme_manage 00:02:37.060 LINK accel_perf 00:02:37.060 LINK blobcli 00:02:37.060 CC test/bdev/bdevio/bdevio.o 00:02:37.060 LINK iscsi_fuzz 00:02:37.319 LINK cuse 00:02:37.578 CC examples/bdev/hello_world/hello_bdev.o 00:02:37.578 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.578 LINK bdevio 00:02:37.578 LINK hello_bdev 00:02:38.146 LINK bdevperf 00:02:38.716 CC examples/nvmf/nvmf/nvmf.o 00:02:38.979 LINK nvmf 00:02:40.953 LINK esnap 00:02:41.212 00:02:41.212 real 0m47.567s 00:02:41.212 user 6m55.104s 00:02:41.212 sys 3m13.348s 00:02:41.212 23:06:50 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:41.212 23:06:50 make -- common/autotest_common.sh@10 -- $ set +x 00:02:41.212 ************************************ 00:02:41.212 END TEST make 00:02:41.212 ************************************ 00:02:41.212 23:06:50 -- common/autotest_common.sh@1142 -- $ return 0 00:02:41.212 23:06:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:41.212 23:06:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:41.212 23:06:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:41.212 23:06:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.212 23:06:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:41.212 23:06:50 -- pm/common@44 -- $ pid=2110033 00:02:41.212 23:06:50 -- pm/common@50 -- $ kill -TERM 2110033 00:02:41.212 23:06:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.212 23:06:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:41.212 23:06:50 -- pm/common@44 -- $ pid=2110034 00:02:41.212 23:06:50 -- pm/common@50 -- $ kill -TERM 2110034 00:02:41.212 23:06:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.212 23:06:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:41.212 23:06:50 -- pm/common@44 -- $ pid=2110036 00:02:41.212 23:06:50 -- pm/common@50 -- $ kill -TERM 2110036 00:02:41.212 23:06:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.212 23:06:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:41.212 23:06:50 -- pm/common@44 -- $ pid=2110060 00:02:41.212 23:06:50 -- pm/common@50 -- $ sudo -E kill -TERM 2110060 00:02:41.212 23:06:50 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:41.212 23:06:50 -- nvmf/common.sh@7 -- # uname -s 00:02:41.212 23:06:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:41.212 23:06:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:41.212 23:06:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:41.212 23:06:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:41.212 23:06:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:41.212 23:06:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:41.212 23:06:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:41.212 23:06:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:41.212 23:06:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:41.212 23:06:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:41.212 23:06:50 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:41.212 23:06:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:41.212 23:06:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:41.212 23:06:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:41.212 23:06:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:41.212 23:06:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:41.212 23:06:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:41.212 23:06:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:41.212 23:06:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.212 23:06:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.212 23:06:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.213 23:06:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.213 23:06:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.213 23:06:50 -- paths/export.sh@5 -- # export PATH 00:02:41.213 23:06:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.213 23:06:50 -- nvmf/common.sh@47 -- # : 0 00:02:41.213 23:06:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:41.213 23:06:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:41.213 23:06:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:41.213 23:06:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:41.213 23:06:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:41.213 23:06:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:41.213 23:06:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:41.213 23:06:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:41.213 23:06:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:41.213 23:06:50 -- spdk/autotest.sh@32 -- # uname -s 00:02:41.213 23:06:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:41.213 23:06:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:41.213 23:06:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.471 23:06:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:41.471 23:06:50 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.471 23:06:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:41.471 23:06:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:41.471 23:06:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:41.471 23:06:50 -- spdk/autotest.sh@48 -- # udevadm_pid=2169789 00:02:41.471 23:06:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:41.471 23:06:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:41.471 23:06:50 -- pm/common@17 -- # local monitor 00:02:41.471 23:06:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.471 23:06:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.471 23:06:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.471 23:06:50 -- pm/common@21 -- # date +%s 00:02:41.471 23:06:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.471 23:06:50 -- pm/common@21 -- # date +%s 00:02:41.471 23:06:50 -- pm/common@25 -- # sleep 1 00:02:41.471 23:06:50 -- pm/common@21 -- # date +%s 00:02:41.471 23:06:50 -- pm/common@21 -- # date +%s 00:02:41.471 23:06:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720645610 00:02:41.471 23:06:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720645610 00:02:41.472 23:06:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720645610 00:02:41.472 23:06:50 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720645610 00:02:41.472 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720645610_collect-vmstat.pm.log 00:02:41.472 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720645610_collect-cpu-temp.pm.log 00:02:41.472 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720645610_collect-cpu-load.pm.log 00:02:41.472 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720645610_collect-bmc-pm.bmc.pm.log 00:02:42.407 23:06:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.407 23:06:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:42.407 23:06:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:42.407 23:06:51 -- common/autotest_common.sh@10 -- # set +x 00:02:42.407 23:06:51 -- spdk/autotest.sh@59 -- # create_test_list 00:02:42.407 23:06:51 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:42.407 23:06:51 -- common/autotest_common.sh@10 -- # set +x 00:02:42.407 23:06:51 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:42.407 23:06:51 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.407 23:06:51 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:02:42.407 23:06:51 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.407 23:06:51 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.407 23:06:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:42.407 23:06:51 -- common/autotest_common.sh@1455 -- # uname 00:02:42.407 23:06:51 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:42.407 23:06:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:42.407 23:06:51 -- common/autotest_common.sh@1475 -- # uname 00:02:42.407 23:06:51 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:42.407 23:06:51 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:42.407 23:06:51 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:42.407 23:06:51 -- spdk/autotest.sh@72 -- # hash lcov 00:02:42.407 23:06:51 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:42.407 23:06:51 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:42.407 --rc lcov_branch_coverage=1 00:02:42.407 --rc lcov_function_coverage=1 00:02:42.407 --rc genhtml_branch_coverage=1 00:02:42.407 --rc genhtml_function_coverage=1 00:02:42.407 --rc genhtml_legend=1 00:02:42.407 --rc geninfo_all_blocks=1 00:02:42.407 ' 00:02:42.407 23:06:51 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:42.407 --rc lcov_branch_coverage=1 00:02:42.407 --rc lcov_function_coverage=1 00:02:42.407 --rc genhtml_branch_coverage=1 00:02:42.407 --rc genhtml_function_coverage=1 00:02:42.407 --rc genhtml_legend=1 00:02:42.407 --rc geninfo_all_blocks=1 00:02:42.407 ' 00:02:42.407 23:06:51 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:42.407 --rc lcov_branch_coverage=1 00:02:42.407 --rc lcov_function_coverage=1 00:02:42.407 --rc genhtml_branch_coverage=1 00:02:42.407 --rc genhtml_function_coverage=1 00:02:42.407 --rc genhtml_legend=1 00:02:42.407 --rc geninfo_all_blocks=1 00:02:42.407 --no-external' 00:02:42.407 23:06:51 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:42.407 --rc lcov_branch_coverage=1 00:02:42.407 --rc lcov_function_coverage=1 00:02:42.407 --rc genhtml_branch_coverage=1 00:02:42.407 --rc genhtml_function_coverage=1 00:02:42.407 --rc genhtml_legend=1 00:02:42.407 --rc geninfo_all_blocks=1 00:02:42.407 --no-external' 00:02:42.407 23:06:51 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:42.407 lcov: LCOV version 1.14 00:02:42.407 23:06:51 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:54.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:54.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:02.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:02.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:02.725 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:02.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:02.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:02.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:02.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:02.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:02.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:02.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:02.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:02.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:02.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:02.726 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:02.726 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:02.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:02.727 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:02.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:02.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:06.007 23:07:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:06.007 23:07:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:06.007 23:07:14 -- common/autotest_common.sh@10 -- # set +x 00:03:06.007 23:07:14 -- spdk/autotest.sh@91 -- # rm -f 00:03:06.007 23:07:14 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.540 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:08.540 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.540 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.540 23:07:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.540 23:07:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.540 23:07:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.540 23:07:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.540 23:07:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.540 23:07:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.540 23:07:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:08.541 23:07:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.541 23:07:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.541 23:07:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.541 
23:07:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:08.541 23:07:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:08.541 23:07:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:08.541 23:07:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:08.541 23:07:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:08.541 No valid GPT data, bailing
00:03:08.541 23:07:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:08.541 23:07:17 -- scripts/common.sh@391 -- # pt=
00:03:08.541 23:07:17 -- scripts/common.sh@392 -- # return 1
00:03:08.541 23:07:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:08.541 1+0 records in
00:03:08.541 1+0 records out
00:03:08.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495096 s, 212 MB/s
00:03:08.541 23:07:17 -- spdk/autotest.sh@118 -- # sync
00:03:08.541 23:07:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:08.541 23:07:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:08.541 23:07:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:13.811 23:07:22 -- spdk/autotest.sh@124 -- # uname -s
00:03:13.811 23:07:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:13.811 23:07:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:13.811 23:07:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.811 23:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.811 23:07:22 -- common/autotest_common.sh@10 -- # set +x
00:03:13.811 ************************************
00:03:13.811 START TEST setup.sh
00:03:13.811 ************************************
00:03:13.811 23:07:22 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:13.811 * Looking for test storage...
00:03:13.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:13.811 23:07:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:13.811 23:07:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:13.811 23:07:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:13.811 23:07:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.811 23:07:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.811 23:07:22 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:13.811 ************************************
00:03:13.811 START TEST acl
00:03:13.811 ************************************
00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:13.811 * Looking for test storage...
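Before the dd above wipes /dev/nvme0n1, block_in_use has spdk-gpt.py look for an existing GPT label; "No valid GPT data, bailing" plus an empty blkid PTTYPE is what lets autotest claim the disk and zero its first MiB. A rough sketch of the underlying probe (assumes 512-byte LBAs; the device name and messages mirror the trace, but scripts/spdk-gpt.py does the full header parse):

    # A GPT header sits at LBA 1 and starts with the ASCII signature "EFI PART".
    # Sketch only -- assumes 512-byte sectors and read access to the device.
    dev=/dev/nvme0n1
    sig=$(sudo dd if="$dev" bs=512 skip=1 count=1 2>/dev/null | head -c 8 | tr -d '\0')
    if [ "$sig" = "EFI PART" ]; then
        echo "GPT label present; $dev looks in use"
    else
        echo "No valid GPT data, bailing"    # the same conclusion the trace reaches
    fi

When no label is found, the runner zeroes the first MiB (autotest.sh@114 above), destroying any stale partition table before the tests lay down their own.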
00:03:13.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.811 23:07:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.811 23:07:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:13.811 23:07:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:13.811 23:07:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:13.811 23:07:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:13.811 23:07:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:13.811 23:07:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:13.811 23:07:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.811 23:07:22 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.140 23:07:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:17.140 23:07:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:17.140 23:07:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.140 23:07:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:17.140 23:07:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.140 23:07:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.675 Hugepages 00:03:19.676 node hugesize free / total 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 00:03:19.676 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:19.676 23:07:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:19.676 23:07:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.676 23:07:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.676 23:07:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:19.676 ************************************ 00:03:19.676 START TEST denied 00:03:19.676 ************************************ 00:03:19.676 23:07:28 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:19.676 23:07:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:19.676 23:07:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:19.676 23:07:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:19.676 23:07:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.676 23:07:28 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.961 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:22.961 23:07:31 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.961 23:07:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.247 00:03:26.247 real 0m6.800s 00:03:26.247 user 0m2.173s 00:03:26.247 sys 0m3.931s 00:03:26.247 23:07:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.247 23:07:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:26.247 ************************************ 00:03:26.247 END TEST denied 00:03:26.247 ************************************ 00:03:26.247 23:07:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:26.247 23:07:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:26.247 23:07:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.247 23:07:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.247 23:07:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:26.247 ************************************ 00:03:26.247 START TEST allowed 00:03:26.247 ************************************ 00:03:26.247 23:07:35 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:26.247 23:07:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:26.247 23:07:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:26.247 23:07:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:26.247 23:07:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.247 23:07:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.438 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.438 23:07:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:30.438 23:07:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:30.438 23:07:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:30.438 23:07:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.438 23:07:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.727 00:03:33.727 real 0m6.862s 00:03:33.727 user 0m2.157s 00:03:33.727 sys 0m3.842s 00:03:33.727 23:07:42 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.727 23:07:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:33.727 ************************************ 00:03:33.727 END TEST allowed 00:03:33.727 ************************************ 00:03:33.727 23:07:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:33.727 00:03:33.727 real 0m19.406s 00:03:33.727 user 0m6.409s 00:03:33.727 sys 0m11.500s 00:03:33.727 23:07:42 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.727 23:07:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.727 ************************************ 00:03:33.727 END TEST acl 00:03:33.727 ************************************ 00:03:33.727 23:07:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:33.727 23:07:42 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.727 23:07:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.727 23:07:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.727 23:07:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.727 ************************************ 00:03:33.727 START TEST hugepages 00:03:33.727 ************************************ 00:03:33.727 23:07:42 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.727 * Looking for test storage... 00:03:33.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173298096 kB' 'MemAvailable: 176150696 kB' 'Buffers: 3896 kB' 'Cached: 10266040 kB' 'SwapCached: 0 kB' 'Active: 7266332 kB' 'Inactive: 3493052 kB' 'Active(anon): 6878448 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492960 kB' 'Mapped: 203188 kB' 'Shmem: 6389000 kB' 'KReclaimable: 232232 kB' 'Slab: 804524 kB' 'SReclaimable: 232232 kB' 'SUnreclaim: 572292 kB' 'KernelStack: 20544 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 8373408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.727 23:07:42 
00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' read -r var val _  (repeated once per /proc/meminfo line)
00:03:33.727 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ $var == Hugepagesize ]] -- # continue for AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free (no match)
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == Hugepagesize ]] / [[ HugePages_Surp == Hugepagesize ]] -- # continue (no match)
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == Hugepagesize ]]
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
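The per-key churn above is common.sh's get_meminfo walking /proc/meminfo one line at a time until the requested key (here Hugepagesize) matches, then echoing its value. A minimal standalone sketch of that scan, assuming only the behaviour visible in the trace (the real common.sh also handles per-node meminfo, as seen further down):

```bash
#!/usr/bin/env bash
# Sketch of the scan traced above: split each "Key:   value kB" line of
# /proc/meminfo on ':' plus whitespace and print the value of the first
# matching key. Illustrative only -- not SPDK's actual common.sh.
get_meminfo_sketch() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # each miss logs one "continue" above
		echo "$val"
		return 0
	done </proc/meminfo
	return 1   # key absent
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this machine, per the trace
```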
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@29-30 -- # nodes_sys[0]=2048, nodes_sys[1]=0
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.728 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:33.729 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # for each node (0, 1) and each size under /sys/devices/system/node/node$node/hugepages/hugepages-*: echo 0 (4 writes)
00:03:33.729 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:33.729 23:07:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:33.729 ************************************
00:03:33.729 START TEST default_setup
00:03:33.729 ************************************
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62-71 -- # user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, nodes_test[0]=1024
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:33.729 23:07:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
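The default_setup test above asks get_test_nr_hugepages for 2097152 kB (2 GiB) and, with the 2048 kB default page size just detected, arrives at nr_hugepages=1024 bound to node 0, after clear_hp has zeroed every per-node count. A sketch of that arithmetic and the clearing loop, using the standard sysfs layout (the helper name is illustrative):

```bash
#!/usr/bin/env bash
# 2097152 kB requested / 2048 kB per page = 1024 hugepages, as traced above.
size_kb=2097152
default_hugepages=2048
nr_hugepages=$((size_kb / default_hugepages))   # -> 1024

# clear_hp-style reset: zero every page-size pool on every NUMA node before
# setup.sh allocates fresh pages (4 writes here: 2 nodes x 2 page sizes).
clear_hp_sketch() {
	local node hp
	for node in /sys/devices/system/node/node[0-9]*; do
		for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
			echo 0 >"$hp"   # requires root
		done
	done
}

echo "would request $nr_hugepages x ${default_hugepages}kB pages on node 0"
```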
00:03:36.255 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:36.255 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:37.193 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
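The rebind output above comes from scripts/setup.sh moving the ioatdma channels and the 0000:5e:00.0 NVMe controller onto vfio-pci so user-space SPDK can drive them. setup.sh's internals are not shown in this trace; the following is only the generic sysfs unbind/override/probe sequence that produces this kind of "driver -> vfio-pci" transition:

```bash
#!/usr/bin/env bash
# Generic rebind-to-vfio-pci sequence (assumes the vfio-pci module is loaded).
# BDF taken from the log line "0000:5e:00.0 (8086 0a54): nvme -> vfio-pci".
bdf=0000:5e:00.0
dev=/sys/bus/pci/devices/$bdf

if [[ -e $dev/driver ]]; then
	echo "$bdf" >"$dev/driver/unbind"      # detach the kernel nvme driver
fi
echo vfio-pci >"$dev/driver_override"      # force the next probe to vfio-pci
echo "$bdf" >/sys/bus/pci/drivers_probe    # re-probe the device
```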
00:03:37.193 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:37.193 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:37.193 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:37.193 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:37.193 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=AnonHugePages node=; mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _
00:03:37.194 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175447104 kB' 'MemAvailable: 178299672 kB' 'Buffers: 3896 kB' 'Cached: 10266144 kB' 'SwapCached: 0 kB' 'Active: 7285208 kB' 'Inactive: 3493052 kB' 'Active(anon): 6897324 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511616 kB' 'Mapped: 203064 kB' 'Shmem: 6389104 kB' 'KReclaimable: 232168 kB' 'Slab: 802948 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570780 kB' 'KernelStack: 20816 kB' 'PageTables: 9808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8393576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315112 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:37.194 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ $var == AnonHugePages ]] -- # continue for MemTotal through HardwareCorrupted (40 keys, no match)
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == AnonHugePages ]]
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
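verify_nr_hugepages has now extracted anon=0 and goes on to pull HugePages_Surp and HugePages_Rsvd the same way; the snapshot above already shows HugePages_Total: 1024 and HugePages_Free: 1024, exactly the pool default_setup requested. A hedged sketch of the kind of bookkeeping such a verifier performs (the real assertions live in setup/hugepages.sh and are not fully visible in this excerpt):

```bash
#!/usr/bin/env bash
# Pull the counters the verifier reads and sanity-check the pool size.
meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

total=$(meminfo HugePages_Total)   # 1024 in the snapshot above
free=$(meminfo HugePages_Free)     # 1024
surp=$(meminfo HugePages_Surp)     # 0
resv=$(meminfo HugePages_Rsvd)     # 0

# Rsvd pages are still counted in Free, so the truly available count is:
avail=$((free - resv))
echo "total=$total free=$free avail=$avail surplus=$surp"
(( total == 1024 )) || echo "pool size differs from the 1024 pages requested"
```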
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Surp node=; mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175447536 kB' 'MemAvailable: 178300104 kB' 'Buffers: 3896 kB' 'Cached: 10266160 kB' 'SwapCached: 0 kB' 'Active: 7284168 kB' 'Inactive: 3493052 kB' 'Active(anon): 6896284 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511016 kB' 'Mapped: 203016 kB' 'Shmem: 6389120 kB' 'KReclaimable: 232168 kB' 'Slab: 802948 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570780 kB' 'KernelStack: 20752 kB' 'PageTables: 9544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8392104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:37.195 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ $var == HugePages_Surp ]] -- # continue for MemTotal through HugePages_Rsvd (51 keys, no match)
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == HugePages_Surp ]]
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
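Note the design cost visible here: each get_meminfo call re-reads and rescans all of /proc/meminfo, which is why three near-identical trace blocks appear for anon, surp, and resv. That is harmless in a test harness, but a single pass can fetch all three values at once; an alternative sketch (not what common.sh does):

```bash
#!/usr/bin/env bash
# One awk pass over /proc/meminfo instead of three full rescans.
read -r anon surp resv < <(awk -F': +' '
	$1 == "AnonHugePages"  { a = $2 + 0 }
	$1 == "HugePages_Surp" { s = $2 + 0 }
	$1 == "HugePages_Rsvd" { r = $2 + 0 }
	END { print a + 0, s + 0, r + 0 }' /proc/meminfo)
echo "anon=$anon surp=$surp resv=$resv"
```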
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Rsvd node=; mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175449856 kB' 'MemAvailable: 178302424 kB' 'Buffers: 3896 kB' 'Cached: 10266164 kB' 'SwapCached: 0 kB' 'Active: 7284524 kB' 'Inactive: 3493052 kB' 'Active(anon): 6896640 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511332 kB' 'Mapped: 202940 kB' 'Shmem: 6389124 kB' 'KReclaimable: 232168 kB' 'Slab: 802932 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570764 kB' 'KernelStack: 20912 kB' 'PageTables: 9932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8393616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315016 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:37.462 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ $var == HugePages_Rsvd ]] -- # continue for MemTotal through Mapped (no match)
00:03:37.463 23:07:46 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.463 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.464 23:07:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.464 nr_hugepages=1024 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.464 resv_hugepages=0 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.464 surplus_hugepages=0 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.464 anon_hugepages=0 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 
23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175452616 kB' 'MemAvailable: 178305184 kB' 'Buffers: 3896 kB' 'Cached: 10266184 kB' 'SwapCached: 0 kB' 'Active: 7284372 kB' 'Inactive: 3493052 kB' 'Active(anon): 6896488 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511304 kB' 'Mapped: 202940 kB' 'Shmem: 6389144 kB' 'KReclaimable: 232168 kB' 'Slab: 802924 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570756 kB' 'KernelStack: 20768 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8393636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315080 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
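The wall of xtrace above is setup/common.sh's get_meminfo scanning a meminfo snapshot key by key: the backslash-escaped \H\u\g\e\P\a\g\e\s\_... strings are simply how `set -x` renders the quoted right-hand side of each [[ $var == "$get" ]] test, and the `continue` after every non-matching key is the loop skipping forward until it can `echo` the requested value and `return 0`. A minimal sketch of that scanner, reconstructed from the trace (names mirror common.sh@16-33, but treat it as an illustration rather than the verbatim upstream function):

  #!/usr/bin/env bash
  shopt -s extglob                             # the +([0-9]) strip below needs extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # With a node id, read that node's own meminfo instead (common.sh@23-24)
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"                # common.sh@28
      mem=("${mem[@]#Node +([0-9]) }")         # strip "Node <id> " prefixes (@29)
      local var val _
      while IFS=': ' read -r var val _; do     # @31: split "Key: value kB"
          [[ $var == "$get" ]] || continue     # @32: skip until the key matches
          echo "$val"                          # @33: print the bare number
          return 0
      done < <(printf '%s\n' "${mem[@]}")      # @16 feeds the loop
      return 1
  }

Against the snapshot printed above, get_meminfo HugePages_Total would print 1024 and get_meminfo HugePages_Rsvd would print 0, matching the surp=0 and resv=0 the test computed earlier in this pass.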
00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.464 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.465 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85988012 kB' 'MemUsed: 11674672 kB' 'SwapCached: 0 kB' 'Active: 4770112 kB' 'Inactive: 3320472 kB' 'Active(anon): 4523608 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917396 kB' 'Mapped: 94152 kB' 'AnonPages: 176336 kB' 'Shmem: 4350420 kB' 'KernelStack: 11976 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343464 kB' 
'SReclaimable: 96284 kB' 'SUnreclaim: 247180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
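From hugepages.sh@115-117 the same scan repeats per NUMA node: with node=0, common.sh@23-24 switch mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a leading "Node 0 " that common.sh@29 strips with an extglob substitution before parsing, which is why the snapshot above reads like /proc/meminfo despite coming from sysfs. A one-line demonstration of that strip (sample line assumed from this run's node-0 file, pre-strip):

  shopt -s extglob                       # required for the +([0-9]) pattern
  line='Node 0 HugePages_Surp: 0'        # as found in node0/meminfo before stripping
  echo "${line#Node +([0-9]) }"          # prints: HugePages_Surp: 0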
00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.466 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
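The node-0 surplus query closes out just below (common.sh@33 echoes 0), which lets the test settle its books: hugepages.sh@110 already confirmed the global total against the sum nr_hugepages + surp + resv, and @115-128 fold the reserved and surplus counts into each node's expectation before printing the PASS line. A hedged sketch of that bookkeeping with this run's values (the sorted_t/sorted_s set comparison at @126-127 is omitted, and the nodes_test seed values are inferred from the output, not shown in this excerpt):

  nr_hugepages=1024 surp=0 resv=0                # from hugepages.sh@99-100
  (( 1024 == nr_hugepages + surp + resv ))       # global total checks out (@110)
  nodes_test=([0]=1024 [1]=0)                    # inferred per-node expectations
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))             # @116: fold reserved pages in
      (( nodes_test[node] += 0 ))                # @117: plus that node's surplus (0 here)
  done
  echo "node0=${nodes_test[0]} expecting 1024"   # the line printed just below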
00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.467 node0=1024 expecting 1024 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.467 00:03:37.467 real 0m3.963s 00:03:37.467 user 0m1.283s 00:03:37.467 sys 0m1.910s 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.467 23:07:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:37.467 ************************************ 00:03:37.467 END TEST default_setup 00:03:37.467 ************************************ 00:03:37.467 23:07:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:37.467 23:07:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.467 23:07:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.467 23:07:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.467 23:07:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.467 ************************************ 00:03:37.467 START TEST per_node_1G_alloc 00:03:37.467 ************************************ 00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- 
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:37.467 23:07:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:40.012 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:40.012 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
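For reference, the sizing arithmetic this trace is stepping through: get_test_nr_hugepages was called with size=1048576 kB (1 GB) for nodes 0 and 1, and with the 2048 kB default hugepage size reported in the meminfo dumps below that works out to 1048576 / 2048 = 512 pages per node, which is exactly the NRHUGE=512 / HUGENODE=0,1 environment handed to scripts/setup.sh. A minimal sketch of that computation (variable names are illustrative, not the script's own):

#!/usr/bin/env bash
# Sketch of the per-node hugepage sizing traced above (illustrative).
size_kb=1048576            # requested per node: 1 GB expressed in kB
default_hugepage_kb=2048   # Hugepagesize from /proc/meminfo
user_nodes=(0 1)           # the node ids passed in

nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512

declare -a nodes_test
for node in "${user_nodes[@]}"; do
  nodes_test[node]=$nr_hugepages   # 512 pages requested on each node
done

echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"
# prints: NRHUGE=512 HUGENODE=0,1

The trace then exports those two values and re-runs scripts/setup.sh, which is what the "setup output" and vfio-pci lines above show.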
00:03:40.012 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.012 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.012 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175460976 kB' 'MemAvailable: 178313544 kB' 'Buffers: 3896 kB' 'Cached: 10266288 kB' 'SwapCached: 0 kB' 'Active: 7284820 kB' 'Inactive: 3493052 kB' 'Active(anon): 6896936 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511032 kB' 'Mapped: 203008 kB' 'Shmem: 6389248 kB' 'KReclaimable: 232168 kB' 'Slab: 803104 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570936 kB' 'KernelStack: 20512 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8392616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315080 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace condensed: setup/common.sh@31-32 scans the keys above (MemTotal … HardwareCorrupted), continuing past each until AnonHugePages matches]
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.014 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175462804 kB' 'MemAvailable: 178315372 kB' 'Buffers: 3896 kB' 'Cached: 10266292 kB' 'SwapCached: 0 kB' 'Active: 7285384 kB' 'Inactive: 3493052 kB' 'Active(anon): 6897500 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511612 kB' 'Mapped: 202972 kB' 'Shmem: 6389252 kB' 'KReclaimable: 232168 kB' 'Slab: 803152 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570984 kB' 'KernelStack: 20432 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8393004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
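Every get_meminfo call in this trace follows the same shape: pick /proc/meminfo (or the per-node /sys/devices/system/node/nodeN/meminfo file, whose "Node N " prefix the mem=("${mem[@]#Node +([0-9]) }") line above strips), then a while IFS=': ' read loop that continues past every key until the requested one matches and its value is echoed; the long key-by-key runs condensed above are that loop under xtrace. A minimal standalone version of the same idea (not the script's exact code; system-wide file only):

#!/usr/bin/env bash
# get_meminfo KEY -- print the value of KEY from /proc/meminfo.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # the "continue" runs in the trace
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}

get_meminfo HugePages_Surp   # prints 0 on the machine traced above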
[xtrace condensed: setup/common.sh@31-32 scans the keys above (MemTotal … HugePages_Rsvd), continuing past each until HugePages_Surp matches]
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.015 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.016 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.016 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.016 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.016 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175458884 kB' 'MemAvailable: 178311452 kB' 'Buffers: 3896 kB' 'Cached: 10266308 kB' 'SwapCached: 0 kB' 'Active: 7285824 kB' 'Inactive: 3493052 kB' 'Active(anon): 6897940 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511744 kB' 'Mapped: 202972 kB' 'Shmem: 6389268 kB' 'KReclaimable: 232168 kB' 'Slab: 803152 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570984 kB' 'KernelStack: 20784 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8394148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315160 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
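The three meminfo dumps above are internally consistent on the hugepage side: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and Hugetlb (2097152 kB) equals HugePages_Total times Hugepagesize (1024 x 2048 kB), an identity that holds whenever only one hugepage size is in use. A quick sanity check along those lines (a sketch, not part of the test itself):

#!/usr/bin/env bash
# Cross-check the hugepage counters in a meminfo snapshot like the ones above.
read -r total free rsvd surp size hugetlb < <(awk '
  /^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
  /^HugePages_Rsvd:/  {r=$2} /^HugePages_Surp:/ {s=$2}
  /^Hugepagesize:/    {z=$2} /^Hugetlb:/        {h=$2}
  END {print t, f, r, s, z, h}' /proc/meminfo)

# In the dumps above: 1024 * 2048 kB == 2097152 kB, matching Hugetlb.
if (( total * size == hugetlb )); then
  echo "Hugetlb consistent: $total x $size kB = $hugetlb kB"
else
  echo "Hugetlb differs: multiple hugepage sizes in use"
fi
echo "total=$total free=$free rsvd=$rsvd surp=$surp"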
[xtrace condensed: setup/common.sh@31-32 begins scanning the keys above for HugePages_Rsvd (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, …); the captured trace breaks off mid-scan at SReclaimable]
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.017 23:07:48 
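The run above traces setup/common.sh's get_meminfo end to end, which makes its logic easy to reconstruct. Below is a minimal bash sketch pieced together from the traced statements (common.sh@16-33); the control flow follows the trace, while the exact redirections, the extglob requirement, and the return-1 fallback are assumptions, not the verbatim SPDK source.

    # Sketch of get_meminfo as traced above (common.sh@16-33); a
    # reconstruction, not the source.
    shopt -s extglob    # assumed: needed for the +([0-9]) pattern at @29

    get_meminfo() {
        local get=$1        # key to look up, e.g. HugePages_Rsvd   (@17)
        local node=${2:-}   # optional NUMA node                    (@18)
        local var val       #                                       (@19)
        local mem_f mem     #                                       (@20)

        mem_f=/proc/meminfo                                       # (@22)
        # Prefer the per-node meminfo when it exists; with node unset this
        # probes /sys/devices/system/node/node/meminfo and falls through,
        # exactly as the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then  # (@23)
            mem_f=/sys/devices/system/node/node$node/meminfo          # (@24)
        fi

        mapfile -t mem < "$mem_f"                                 # (@28)
        mem=("${mem[@]#Node +([0-9]) }")  # strip "Node N " prefix (@29)

        # The per-key scan that dominates the raw trace: one IFS/read pair
        # per key, "continue" for every key that is not $get.
        while IFS=': ' read -r var val _; do                      # (@31)
            [[ $var == "$get" ]] || continue                      # (@32)
            echo "$val"                                           # (@33)
            return 0                                              # (@33)
        done < <(printf '%s\n' "${mem[@]}")                       # (@16)
        return 1    # assumed fallback for a key that never appears
    }

With this sketch, get_meminfo HugePages_Rsvd yields 0 and get_meminfo HugePages_Surp 0 reads node0's meminfo, matching the values echoed in the log.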
00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:40.017 nr_hugepages=1024
00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:40.017 resv_hugepages=0
00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:40.017 surplus_hugepages=0
00:03:40.017 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:40.017 anon_hugepages=0
00:03:40.018 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.018 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:40.018 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:40.018 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[log condensed: get_meminfo sets up against /proc/meminfo exactly as above (common.sh@18-31)]
00:03:40.018 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175465744 kB' 'MemAvailable: 178318312 kB' 'Buffers: 3896 kB' 'Cached: 10266332 kB' 'SwapCached: 0 kB' 'Active: 7286432 kB' 'Inactive: 3493052 kB' 'Active(anon): 6898548 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512280 kB' 'Mapped: 202972 kB' 'Shmem: 6389292 kB' 'KReclaimable: 232168 kB' 'Slab: 803152 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570984 kB' 'KernelStack: 21056 kB' 'PageTables: 10396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8394172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315256 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[log condensed: per-key scan; every key ahead of HugePages_Total takes the "continue" branch at common.sh@32]
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
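The assertions traced at hugepages.sh@107-@110 boil down to one accounting identity: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, which holds here as 1024 == 1024 + 0 + 0. A hedged restatement, reusing the get_meminfo sketch above; variable names follow the trace, the error message is illustrative only.

    # Accounting identity checked at hugepages.sh@107/@110 (reconstruction).
    nr_hugepages=1024                     # requested page count
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run (hugepages.sh@100)
    total=$(get_meminfo HugePages_Total)  # 1024 in this run

    (( total == nr_hugepages + surp + resv )) \
        || echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2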
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.019 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[log condensed: get_meminfo sets up with get=HugePages_Surp, node=0 (common.sh@17-20), finds /sys/devices/system/node/node0/meminfo and switches mem_f to it (common.sh@23-24), then loads it (common.sh@28-31)]
00:03:40.020 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87069116 kB' 'MemUsed: 10593568 kB' 'SwapCached: 0 kB' 'Active: 4771264 kB' 'Inactive: 3320472 kB' 'Active(anon): 4524760 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917404 kB' 'Mapped: 94164 kB' 'AnonPages: 177236 kB' 'Shmem: 4350428 kB' 'KernelStack: 12632 kB' 'PageTables: 6284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343476 kB' 'SReclaimable: 96284 kB' 'SUnreclaim: 247192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log condensed: per-key scan of node0 meminfo; every key ahead of HugePages_Surp takes the "continue" branch at common.sh@32]
00:03:40.021 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.021 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.021 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.021 23:07:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[log condensed: get_meminfo sets up with get=HugePages_Surp, node=1 and loads /sys/devices/system/node/node1/meminfo (common.sh@17-31)]
00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 88394848 kB' 'MemUsed: 5323644 kB' 'SwapCached: 0 kB' 'Active: 2515716 kB' 'Inactive: 172580 kB' 'Active(anon): 2374336 kB' 'Inactive(anon): 0 kB' 'Active(file): 141380 kB' 'Inactive(file): 172580 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2352868 kB' 'Mapped: 108808 kB' 'AnonPages: 335564 kB' 'Shmem: 2038908 kB' 'KernelStack: 8744 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135884 kB' 'Slab: 459676 kB' 'SReclaimable: 135884 kB' 'SUnreclaim: 323792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log condensed: per-key scan of node1 meminfo begins; the excerpt breaks off mid-scan, after the Inactive(anon) comparison]
00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.021 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.021 23:07:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.022 node0=512 expecting 512 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.022 node1=512 expecting 512 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.022 00:03:40.022 real 0m2.592s 00:03:40.022 user 0m1.034s 00:03:40.022 sys 0m1.578s 00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.022 23:07:49 
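The trace above is setup/common.sh's get_meminfo helper scanning one NUMA node's meminfo a field at a time. A minimal runnable sketch of that pattern, assuming the standard /sys/devices/system/node/nodeN/meminfo layout; the function name get_meminfo_sketch is illustrative, not the SPDK source:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: pick one field out of
# /proc/meminfo, or out of a per-node meminfo file when a node is given.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=$2 var val mem_f mem
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        # every non-matching field is skipped, hence the long runs of 'continue' above
        [[ $var == "$get" ]] && echo "${val:-0}" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}
get_meminfo_sketch HugePages_Surp 1   # prints 0 for the node1 snapshot above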
00:03:40.022 23:07:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:40.022 ************************************
00:03:40.022 END TEST per_node_1G_alloc
00:03:40.022 ************************************
00:03:40.022 23:07:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:40.022 23:07:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:40.022 23:07:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:40.022 23:07:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:40.282 ************************************
00:03:40.282 START TEST even_2G_alloc
00:03:40.282 ************************************
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
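get_test_nr_hugepages above turns the requested size into a page count and splits it evenly across nodes: 2097152 kB / 2048 kB per default hugepage = 1024 pages, then 1024 / 2 nodes = 512 per node, which is the pair of nodes_test[...]=512 assignments in the trace. A sketch of that arithmetic, assuming this host's 2048 kB default hugepage size (variable names mirror the trace):

#!/usr/bin/env bash
# Sketch: the even split computed by get_test_nr_hugepages(_per_node) above.
size_kb=2097152          # requested test size, in kB
default_hugepage_kb=2048
_no_nodes=2
nr_hugepages=$((size_kb / default_hugepage_kb))     # 1024
declare -a nodes_test
for ((node = 0; node < _no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / _no_nodes))  # 512 per node
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"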
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.282 23:07:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:42.839 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:42.839 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.839 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175485160 kB' 'MemAvailable: 178337728 kB' 'Buffers: 3896 kB' 'Cached: 10266436 kB' 'SwapCached: 0 kB' 'Active: 7281288 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893404 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507276 kB' 'Mapped: 202032 kB' 'Shmem: 6389396 kB' 'KReclaimable: 232168 kB' 'Slab: 802960 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570792 kB' 'KernelStack: 20768 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315128 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:42.839 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... xtrace condensed: fields MemTotal … HardwareCorrupted each fail the AnonHugePages match and are skipped with 'continue' ...]
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.840 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.841 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.841 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.841 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175486884 kB' 'MemAvailable: 178339452 kB' 'Buffers: 3896 kB' 'Cached: 10266440 kB' 'SwapCached: 0 kB' 'Active: 7280628 kB' 'Inactive: 3493052 kB' 'Active(anon): 6892744 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506584 kB' 'Mapped: 201928 kB' 'Shmem: 6389400 kB' 'KReclaimable: 232168 kB' 'Slab: 803484 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571316 kB' 'KernelStack: 20640 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8383016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
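verify_nr_hugepages, traced here, first gates on transparent hugepages (the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test is the expanded contents of /sys/kernel/mm/transparent_hugepage/enabled) and then pulls AnonHugePages and HugePages_Surp from the snapshots above. A hedged sketch of those two global probes; the sysfs/procfs paths are the standard kernel interface, and the surrounding bookkeeping is paraphrased from the trace rather than copied from SPDK:

#!/usr/bin/env bash
# Sketch: the anon/surplus probes feeding verify_nr_hugepages above.
anon=0
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP enabled: anonymous hugepages could skew the accounting, so record them
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 kB in the dump above
fi
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # surplus beyond nr_hugepages
echo "anon=${anon} surp=${surp}"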
00:03:42.841 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... xtrace condensed: fields MemTotal … FileHugePages each fail the HugePages_Surp match and are skipped with 'continue' ...]
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
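The trace above is the field-by-field scan inside common.sh's get_meminfo helper: it loads a meminfo file into an array, then compares each key against the requested one until it hits a match. The following is a minimal sketch reconstructed purely from the xtrace lines, not the verbatim setup/common.sh source; the not-found fallback at the end is an assumption.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # A node argument switches to the node-local meminfo, whose lines
    # carry a "Node <id> " prefix that the expansion below strips.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan key by key until the requested counter matches, then print
    # its value; the trailing unit column ("kB"), if any, is discarded.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # assumption: behavior when the key is absent
}

get_meminfo HugePages_Surp   # prints 0 on the box traced above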
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175496732 kB' 'MemAvailable: 178349300 kB' 'Buffers: 3896 kB' 'Cached: 10266456 kB' 'SwapCached: 0 kB' 'Active: 7281208 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893324 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507116 kB' 'Mapped: 201928 kB' 'Shmem: 6389416 kB' 'KReclaimable: 232168 kB' 'Slab: 803388 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571220 kB' 'KernelStack: 20976 kB' 'PageTables: 10112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8383036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315112 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.842 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the setup/common.sh@31-32 compare-and-continue pair repeats for every key from MemFree through HugePages_Free, none matching HugePages_Rsvd]
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:42.844 nr_hugepages=1024
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:42.844 resv_hugepages=0
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:42.844 surplus_hugepages=0
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:42.844 anon_hugepages=0
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
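At this point setup/hugepages.sh folds the counters it just read back into a consistency check before trusting the allocation. A rough sketch of that accounting, assuming the get_meminfo sketch shown earlier; "expected" is a stand-in name introduced here, the traced run hard-wires 1024:

# Accounting check sketched from the setup/hugepages.sh@99-110 trace.
expected=1024                                  # pages the test requested
surp=$(get_meminfo HugePages_Surp)             # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)             # 0 in this run
nr_hugepages=$(get_meminfo HugePages_Total)    # 1024 in this run

# Every requested page must be accounted for, with no surplus or
# reserved leftovers, before the even-allocation test proceeds.
(( expected == nr_hugepages + surp + resv )) || exit 1
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"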
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.844 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175495296 kB' 'MemAvailable: 178347864 kB' 'Buffers: 3896 kB' 'Cached: 10266456 kB' 'SwapCached: 0 kB' 'Active: 7281028 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893144 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506956 kB' 'Mapped: 201928 kB' 'Shmem: 6389416 kB' 'KReclaimable: 232168 kB' 'Slab: 803324 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571156 kB' 'KernelStack: 20944 kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8383060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315096 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:42.845 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.845 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the setup/common.sh@31-32 compare-and-continue pair repeats for every key from MemFree through Unaccepted, none matching HugePages_Total]
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
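The get_nodes walk above enumerates the NUMA nodes under sysfs and records a per-node page count, and the node-scoped get_meminfo call that follows switches from /proc/meminfo to the node-local meminfo file. A rough sketch of that enumeration, reusing the get_meminfo sketch from earlier; reading the count back through get_meminfo is this sketch's choice, and the final echo is illustrative:

# Per-node enumeration sketched from the setup/hugepages.sh get_nodes trace.
shopt -s extglob
declare -a nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                       # node0 -> 0, node1 -> 1
    # Per-node counters live in the node-local meminfo; its
    # "Node <id> " line prefix is stripped inside get_meminfo.
    nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
done

no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes per-node=${nodes_sys[*]}"   # 2 nodes x 512 pages in this run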
00:03:42.846 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87092500 kB' 'MemUsed: 10570184 kB' 'SwapCached: 0 kB' 'Active: 4768880 kB' 'Inactive: 3320472 kB' 'Active(anon): 4522376 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917408 kB' 'Mapped: 93340 kB' 'AnonPages: 175016 kB' 'Shmem: 4350432 kB' 'KernelStack: 12088 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343660 kB' 'SReclaimable: 96284 kB' 'SUnreclaim: 247376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 walks the node0 meminfo keys MemTotal through HugePages_Free, skipping each non-matching key with "continue"]
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.847 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 88402476 kB' 'MemUsed: 5316016 kB' 'SwapCached: 0 kB' 'Active: 2512604 kB' 'Inactive: 172580 kB' 'Active(anon): 2371224 kB' 'Inactive(anon): 0 kB' 'Active(file): 141380 kB' 'Inactive(file): 172580 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2353008 kB' 'Mapped: 108588 kB' 'AnonPages: 332332 kB' 'Shmem: 2039048 kB' 'KernelStack: 8680 kB' 'PageTables: 4840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135884 kB' 'Slab: 459664 kB' 'SReclaimable: 135884 kB' 'SUnreclaim: 323780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 walks the node1 meminfo keys MemTotal through HugePages_Free, skipping each non-matching key with "continue"]
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:42.848
00:03:42.848 real 0m2.724s
00:03:42.848 user 0m1.066s
00:03:42.848 sys 0m1.628s
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:42.848 23:07:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:42.848 ************************************
00:03:42.848 END TEST even_2G_alloc
00:03:42.848 ************************************
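even_2G_alloc passes here: 1024 pages allocated with even distribution end up as 512 per node, and both node checks print the expected count. A compact sketch of the same verification, reading HugePages_Total straight from each node's standard sysfs meminfo file (the loop is illustrative, not the hugepages.sh code):

    # Expect an even split of the 1024-page pool across the 2 nodes.
    expected=$((1024 / 2))
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        # Per-node lines look like "Node 0 HugePages_Total:   512"
        got=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "${node_dir##*/}=$got expecting $expected"
    done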
00:03:42.848 23:07:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:42.848 23:07:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:42.848 23:07:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:42.848 23:07:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:42.848 23:07:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:42.848 ************************************
00:03:42.848 START TEST odd_alloc
00:03:42.848 ************************************
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:42.848 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.849 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.118 23:07:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
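odd_alloc deliberately requests an odd pool: HUGEMEM=2049 MB gives size = 2049 * 1024 = 2098176 kB, and at the 2048 kB page size seen later in this log that is 2098176 / 2048 = 1024.5, rounded up to 1025 pages. The per-node pass above then assigns 512 to node1 and the 513-page remainder to node0. A sketch of that split under those assumptions (an illustrative reconstruction, not the verbatim hugepages.sh loop):

    # Split an odd page count across NUMA nodes: integer division per node,
    # remainder folded into the last assignment (1025 over 2 nodes).
    nr=1025 no_nodes=2
    declare -a nodes_test
    while (( no_nodes > 0 )); do
        nodes_test[no_nodes - 1]=$(( nr / no_nodes ))   # node1 first: 512
        nr=$(( nr - nodes_test[no_nodes - 1] ))         # 513 pages left
        (( no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # -> 513 512, matching the trace

setup.sh is then rerun so the kernel actually reserves the new per-node counts before verification.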
00:03:45.024 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:45.024 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:45.024 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
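These are the devices setup.sh manages: 8086 0a54 appears to be an Intel datacenter NVMe controller and the 8086 2021 entries appear to be I/OAT DMA channels; all are already bound to vfio-pci from the earlier run, so no rebinding is needed. A quick way to confirm which kernel driver a listed device is bound to, using the standard sysfs layout and an address taken from the log above:

    # Prints the bound driver name for one PCI function (e.g. "vfio-pci").
    basename "$(readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver)"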
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.290 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175482408 kB' 'MemAvailable: 178334976 kB' 'Buffers: 3896 kB' 'Cached: 10266580 kB' 'SwapCached: 0 kB' 'Active: 7287044 kB' 'Inactive: 3493052 kB' 'Active(anon): 6899160 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512344 kB' 'Mapped: 202544 kB' 'Shmem: 6389540 kB' 'KReclaimable: 232168 kB' 'Slab: 803572 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571404 kB' 'KernelStack: 20480 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 8388044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314988 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace condensed: setup/common.sh@32 skips every /proc/meminfo key from MemTotal through HardwareCorrupted with "continue" until AnonHugePages matches]
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
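The guard at hugepages.sh@96 inspects the transparent-hugepage policy string ("always [madvise] never"); since "never" is not the bracketed selection, AnonHugePages is read, presumably so THP-backed anonymous memory can be accounted separately from the explicit pool. A sketch of that check under those assumptions (intent inferred from the trace; variable names are illustrative):

    # Read the current THP policy; the active mode is the bracketed word.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # AnonHugePages is reported in kB in /proc/meminfo (0 in this run).
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=$anon"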
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.291 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175483352 kB' 'MemAvailable: 178335920 kB' 'Buffers: 3896 kB' 'Cached: 10266584 kB' 'SwapCached: 0 kB' 'Active: 7281896 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894012 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507232 kB' 'Mapped: 202424 kB' 'Shmem: 6389544 kB' 'KReclaimable: 232168 kB' 'Slab: 803556 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571388 kB' 'KernelStack: 20544 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 8394712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[trace condensed: setup/common.sh@31-@32 scan of every /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp, each mismatch falling through to 'continue']
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.293 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175489028 kB' 'MemAvailable: 178341596 kB' 'Buffers: 3896 kB' 'Cached: 10266584 kB' 'SwapCached: 0 kB' 'Active: 7279168 kB' 'Inactive: 3493052 kB' 'Active(anon): 6891284 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505072 kB' 'Mapped: 201924 kB' 'Shmem: 6389544 kB' 'KReclaimable: 232168 kB' 'Slab: 803508 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20496 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 8383088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[trace condensed: setup/common.sh@31-@32 scan of every /proc/meminfo key from MemTotal through HugePages_Free against HugePages_Rsvd, each mismatch falling through to 'continue']
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:45.295 nr_hugepages=1025
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.295 resv_hugepages=0
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.295 surplus_hugepages=0
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.295 anon_hugepages=0
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
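[editor's note] The loops condensed above all come from one small helper. Below is a minimal sketch of get_meminfo as it can be reconstructed from the commands and setup/common.sh line numbers visible in this xtrace (@16-@33); it is an inference from the log, not the verbatim SPDK source, and the if/while framing around the traced statements is assumed. The trace performs the @23 and @25 checks as separate tests; they are folded into one condition here.

    #!/usr/bin/env bash
    shopt -s extglob # needed for the +([0-9]) pattern in the prefix strip below

    # Sketch of get_meminfo as implied by the trace (setup/common.sh@17-@33).
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With $node empty this tests /sys/devices/system/node/node/meminfo,
        # which does not exist, so the global file is kept -- exactly what
        # the trace shows at common.sh@23/@25.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix lines with "Node N "

        # common.sh@31-@33: split each "Key: value kB" line on ': ' and echo
        # the value of the first key equal to $get.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total # -> 1025 against the snapshots above

The linear scan is also why this trace is so voluminous: under set -x every key comparison and its 'continue' is logged, so a single lookup emits roughly sixty lines.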
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.295 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175487748 kB' 'MemAvailable: 178340316 kB' 'Buffers: 3896 kB' 'Cached: 10266620 kB' 'SwapCached: 0 kB' 'Active: 7281024 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893140 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506804 kB' 'Mapped: 201924 kB' 'Shmem: 6389580 kB' 'KReclaimable: 232168 kB' 'Slab: 803508 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20592 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 8383240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315160 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[trace condensed: setup/common.sh@31-@32 scan against HugePages_Total in progress, MemTotal through CmaFree each falling through to 'continue'; the log is truncated mid-scan at this point]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.296 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87070160 kB' 'MemUsed: 10592524 kB' 'SwapCached: 0 kB' 'Active: 4768796 kB' 'Inactive: 3320472 kB' 'Active(anon): 4522292 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917476 kB' 'Mapped: 93328 kB' 'AnonPages: 174948 kB' 'Shmem: 4350500 kB' 'KernelStack: 12072 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343968 kB' 'SReclaimable: 96284 kB' 'SUnreclaim: 247684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.297 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 88415144 kB' 'MemUsed: 5303348 kB' 'SwapCached: 0 kB' 'Active: 2512584 kB' 'Inactive: 172580 kB' 'Active(anon): 2371204 kB' 'Inactive(anon): 0 kB' 'Active(file): 141380 kB' 'Inactive(file): 172580 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2353068 kB' 'Mapped: 108596 kB' 'AnonPages: 332172 kB' 'Shmem: 2039108 kB' 'KernelStack: 8664 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135884 kB' 'Slab: 459540 kB' 'SReclaimable: 135884 kB' 'SUnreclaim: 323656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.298 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.299 23:07:54 
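For reference, every get_meminfo call condensed above follows one pattern: pick /proc/meminfo or the per-node sysfs meminfo file, strip the "Node N " prefix that sysfs adds, then IFS-split each "Key: value" line until the requested key matches and echo its value. A minimal stand-alone bash sketch of that pattern, reconstructed from the trace rather than copied verbatim from SPDK's setup/common.sh:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo pattern seen in the xtrace above.
  shopt -s extglob                      # needed for the +([0-9]) prefix strip
  get_meminfo() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs when a node id is supplied.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix in sysfs files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # The trace shows this comparison repeated once per key.
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }
  get_meminfo HugePages_Surp 0          # prints node0's surplus count (0 in the trace)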
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:45.299 node0=512 expecting 513 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:45.299 node1=513 expecting 512 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:45.299 00:03:45.299 real 0m2.429s 00:03:45.299 user 0m0.917s 00:03:45.299 sys 0m1.501s 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.299 23:07:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.299 ************************************ 00:03:45.299 END TEST odd_alloc 00:03:45.299 ************************************ 00:03:45.558 23:07:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:45.558 23:07:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:45.558 23:07:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.558 23:07:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.558 23:07:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.558 ************************************ 00:03:45.558 START TEST custom_alloc 00:03:45.558 ************************************ 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.558 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # 
(( 1 > 0 )) 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.559 23:07:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.123 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.123 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.123 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.123 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:00:04.3 (8086 2021): Already using the 
vfio-pci driver 00:03:48.124 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.124 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 174451076 kB' 'MemAvailable: 177303644 kB' 'Buffers: 3896 kB' 'Cached: 10266736 kB' 'SwapCached: 0 kB' 'Active: 7281264 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893380 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506940 kB' 'Mapped: 201992 kB' 'Shmem: 6389696 kB' 
'KReclaimable: 232168 kB' 'Slab: 803036 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570868 kB' 'KernelStack: 20480 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 8381472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.124 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.124 23:07:56 
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.125 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 174456232 kB' 'MemAvailable: 177308800 kB' 'Buffers: 3896 kB' 'Cached: 10266740 kB' 'SwapCached: 0 kB' 'Active: 7280764 kB' 'Inactive: 3493052 kB' 'Active(anon): 6892880 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506428 kB' 'Mapped: 201944 kB' 'Shmem: 6389700 kB' 'KReclaimable: 232168 kB' 'Slab: 803088 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570920 kB' 'KernelStack: 20432 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 8381492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace condensed: the same setup/common.sh@31-32 scan runs again, this time against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, continuing past every key from MemTotal through HugePages_Rsvd]
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
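The get_meminfo helper traced twice above boils down to a small /proc/meminfo parser: slurp the file, strip any per-NUMA-node "Node N " prefix, then scan key by key until the requested one matches. A minimal sketch reconstructed from the xtrace (the real setup/common.sh may differ in details; the node-argument handling here is inferred from the "local node=" and /sys/devices/system/node/node/meminfo lines):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
            local get=$1 node=${2:-}   # key to look up, optional NUMA node
            local var val
            local mem_f mem

            mem_f=/proc/meminfo
            # Per-node meminfo entries are prefixed, e.g. "Node 0 MemTotal: ... kB"
            [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo

            mapfile -t mem < "$mem_f"
            mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix if present

            # Scan exactly as the xtrace shows: "continue" until $get matches,
            # then print the value and return success.
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] || continue
                    echo "$val" && return 0
            done < <(printf '%s\n' "${mem[@]}")
            return 1
    }

    # usage, mirroring hugepages.sh: surp=$(get_meminfo HugePages_Surp)

The linear scan explains the wall of continue lines in the trace: every lookup re-reads the whole snapshot until it reaches the requested key.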
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.127 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 174456232 kB' 'MemAvailable: 177308800 kB' 'Buffers: 3896 kB' 'Cached: 10266752 kB' 'SwapCached: 0 kB' 'Active: 7281312 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893428 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507008 kB' 'Mapped: 201944 kB' 'Shmem: 6389712 kB' 'KReclaimable: 232168 kB' 'Slab: 803088 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570920 kB' 'KernelStack: 20464 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 8384128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace condensed: the scan runs a third time, against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, continuing past every key from MemTotal through HugePages_Free]
00:03:48.128 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.128 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.128 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.128 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:48.128 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:48.128 nr_hugepages=1536
00:03:48.129 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.129 resv_hugepages=0
00:03:48.129 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.129 surplus_hugepages=0
00:03:48.129 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.129 anon_hugepages=0
00:03:48.129 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:48.129 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
6389736 kB' 'KReclaimable: 232168 kB' 'Slab: 803088 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570920 kB' 'KernelStack: 20400 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 8383028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB' 00:03:48.129 23:07:56 [xtrace elided: the setup/common.sh@31-32 read/compare/continue cycle repeats for each /proc/meminfo field, MemTotal through Unaccepted, until the key under test is reached] 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
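
The wall of trace above is setup/common.sh's get_meminfo helper scanning a meminfo file one 'Key: value' pair at a time; the backslash-heavy [[ ... == \H\u\g\e... ]] lines are just how bash xtrace prints the quoted pattern on the right-hand side of ==. A minimal paraphrase of the flow, reconstructed from the common.sh@17-@33 trace lines rather than copied from the SPDK source (extglob is assumed enabled, as the node+([0-9]) globs in the trace imply):

    shopt -s extglob

    # get_meminfo KEY [NODE] -> echoes the value column for KEY.
    # Paraphrase of the behavior visible at setup/common.sh@17-@33 above;
    # a sketch, not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=$2
        local var val line mem_f mem

        mem_f=/proc/meminfo                                   # @22
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo  # @23-@24
        fi

        mapfile -t mem < "$mem_f"                             # @28
        mem=("${mem[@]#Node +([0-9]) }")                      # @29: strip "Node N " prefix

        for line in "${mem[@]}"; do                           # @31-@33: scan until match
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
    }

With node empty, the [[ -e ]] probe at @23 tests the nonexistent path /sys/devices/system/node/node/meminfo, which is why the system-wide lookups above stayed on /proc/meminfo.
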
00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87095252 kB' 'MemUsed: 10567432 kB' 'SwapCached: 0 kB' 'Active: 4767980 kB' 'Inactive: 3320472 kB' 'Active(anon): 4521476 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917604 kB' 'Mapped: 93328 kB' 'AnonPages: 174012 kB' 'Shmem: 4350628 kB' 'KernelStack: 11736 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343444 kB' 'SReclaimable: 96284 kB' 'SUnreclaim: 247160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.130 23:07:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.130 23:07:56 [xtrace elided: the setup/common.sh@31-32 read/compare/continue cycle repeats for each node0 meminfo field until HugePages_Surp is reached]
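
The scan in progress here is the same lookup pointed at /sys/devices/system/node/node0/meminfo: verify_nr_hugepages walks every node and folds the reserved and surplus counts into that node's expected total before comparing. Paraphrased from the hugepages.sh@115-@117 trace lines around this point (a sketch building on the get_meminfo sketch above, not the verbatim script):

    declare -a nodes_test=([0]=512 [1]=1024)   # expected split, set up earlier by the test
    resv=0                                     # HugePages_Rsvd fetched earlier in the trace

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117; 0 here on both nodes
    done
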
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.131 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 87360472 kB' 'MemUsed: 6358020 kB' 'SwapCached: 0 kB' 'Active: 2513512 kB' 'Inactive: 172580 kB' 'Active(anon): 2372132 kB' 'Inactive(anon): 0 kB' 'Active(file): 141380 kB' 'Inactive(file): 172580 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2353092 kB' 'Mapped: 108616 kB' 'AnonPages: 333148 kB' 'Shmem: 2039132 kB' 'KernelStack: 8824 kB' 'PageTables: 5156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135884 kB' 'Slab: 459636 kB' 'SReclaimable: 135884 kB' 'SUnreclaim: 323752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 23:07:56 
setup.sh.hugepages.custom_alloc -- [xtrace elided: the setup/common.sh@31-32 read/compare/continue cycle repeats for each node1 meminfo field until HugePages_Surp is reached] 00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 23:07:56
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:48.133 node0=512 expecting 512
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:48.133 node1=1024 expecting 1024
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:48.133
00:03:48.133 real 0m2.571s
00:03:48.133 user 0m0.983s
00:03:48.133 sys 0m1.564s
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:48.133 23:07:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:48.133 ************************************
00:03:48.133 END TEST custom_alloc
00:03:48.133 ************************************
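A note on the odd-looking comparison just above: inside [[ ]] an unquoted right-hand side of == is a glob pattern, so the script quotes its expectation to force a literal match, and bash's xtrace renders a quoted pattern by backslash-escaping every character, hence \5\1\2\,\1\0\2\4. A sketch with hypothetical variable names:

    actual=512,1024      # per-node counts joined up by the verify loop (names are mine)
    expected=512,1024
    [[ $actual == "$expected" ]] && echo 'per-node hugepage split verified'
    # under "set -x" the test above prints as: [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]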
00:03:48.133 23:07:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:48.133 23:07:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:48.133 23:07:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:48.133 23:07:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:48.133 23:07:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:48.133 ************************************
00:03:48.133 START TEST no_shrink_alloc
00:03:48.133 ************************************
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.133 23:07:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:50.674 [... setup.sh output elided: 0000:5e:00.0 (8086 0a54) and all sixteen 0000:00:04.x / 0000:80:04.x (8086 2021) functions report "Already using the vfio-pci driver" ...]
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
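The get_test_nr_hugepages trace above reduces to a few lines of arithmetic: a 2097152 kB request at the default 2048 kB page size becomes 1024 pages, and because node id 0 was passed, all of them are assigned to node0. A sketch under the assumption that nr_hugepages comes from dividing the requested size by the page size (the trace only shows the result, 1024):

    size=2097152              # kB, from "get_test_nr_hugepages 2097152 0"
    default_hugepages=2048    # kB; assumed here, the script derives it from Hugepagesize
    user_nodes=(0)            # the trailing "0" argument pins the pages to node0
    nodes_test=()

    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages             # nodes_test[0]=1024
    done
    echo "node0=${nodes_test[0]}"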
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.674 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.675 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.675 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175522208 kB' 'MemAvailable: 178374776 kB' 'Buffers: 3896 kB' 'Cached: 10266880 kB' 'SwapCached: 0 kB' 'Active: 7281312 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893428 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506820 kB' 'Mapped: 202060 kB' 'Shmem: 6389840 kB' 'KReclaimable: 232168 kB' 'Slab: 802536 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570368 kB' 'KernelStack: 20448 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:50.675 [... xtrace elided: the AnonHugePages lookup walks the snapshot key by key (MemTotal through HardwareCorrupted) with the same IFS=': ' / read -r / continue cycle until AnonHugePages matches ...]
00:03:50.676 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.676 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:50.676 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:50.676 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.676 [... xtrace elided: same get_meminfo prologue as above, now with get=HugePages_Surp ...]
00:03:50.676 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175526128 kB' 'MemAvailable: 178378696 kB' 'Buffers: 3896 kB' 'Cached: 10266880 kB' 'SwapCached: 0 kB' 'Active: 7281476 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893592 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506988 kB' 'Mapped: 202036 kB' 'Shmem: 6389840 kB' 'KReclaimable: 232168 kB' 'Slab: 802504 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570336 kB' 'KernelStack: 20400 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
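The mem=("${mem[@]#Node +([0-9]) }") line in the prologue above is an extglob trick: per-node meminfo entries carry a "Node <id> " prefix, and the expansion strips it so the same key scan works for both /proc/meminfo and /sys/devices/system/node/nodeN/meminfo. A small demonstration with made-up input:

    shopt -s extglob
    # hypothetical per-node lines, just to show the prefix strip
    mapfile -t mem < <(printf '%s\n' \
        'Node 0 MemTotal:       98304 kB' \
        'Node 0 HugePages_Total:   512')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal:       98304 kB
    # HugePages_Total:   512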
00:03:50.677 [... xtrace elided: the HugePages_Surp lookup walks the snapshot key by key with the same IFS=': ' / read -r / continue cycle until HugePages_Surp matches ...]
00:03:50.678 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.678 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:50.678 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:50.678 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.678 [... xtrace elided: same get_meminfo prologue, now with get=HugePages_Rsvd ...]
00:03:50.678 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175525612 kB' 'MemAvailable: 178378180 kB' 'Buffers: 3896 kB' 'Cached: 10266900 kB' 'SwapCached: 0 kB' 'Active: 7281356 kB' 'Inactive: 3493052 kB' 'Active(anon): 6893472 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506892 kB' 'Mapped: 201960 kB' 'Shmem: 6389860 kB' 'KReclaimable: 232168 kB' 'Slab: 802504 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570336 kB' 'KernelStack: 20448 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
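A quick consistency check on the snapshots: the Hugetlb figure is just HugePages_Total times Hugepagesize.

    hp_total=1024       # HugePages_Total in every snapshot above
    hp_size_kb=2048     # Hugepagesize
    echo "Hugetlb: $(( hp_total * hp_size_kb )) kB"   # Hugetlb: 2097152 kB, as reported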
00:03:50.679 [... xtrace elided: the HugePages_Rsvd lookup begins the same key-by-key walk (MemTotal, MemFree, ...); the captured excerpt ends partway through this scan ...]
setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.679 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.680 23:07:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.680 nr_hugepages=1024 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.680 resv_hugepages=0 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.680 surplus_hugepages=0 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.680 anon_hugepages=0 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175524352 kB' 'MemAvailable: 178376920 kB' 'Buffers: 3896 kB' 'Cached: 10266920 kB' 'SwapCached: 0 kB' 'Active: 7282148 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894264 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
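For readers tracing along: get_meminfo in setup/common.sh reads a meminfo file with IFS=': ' and read -r var val _, echoing the value of the first field whose name equals the requested key (HugePages_Rsvd above, hence the echo 0). A minimal sketch of the same technique, streaming the file instead of the mapfile slurp the real script uses:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above: split each line of
    # a meminfo-style file on ': ' and return the first matching value.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # "HugePages_Rsvd:    0" -> var=HugePages_Rsvd, val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"${2:-/proc/meminfo}"
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 on this box, per the log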
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.680 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175524352 kB' 'MemAvailable: 178376920 kB' 'Buffers: 3896 kB' 'Cached: 10266920 kB' 'SwapCached: 0 kB' 'Active: 7282148 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894264 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507636 kB' 'Mapped: 202464 kB' 'Shmem: 6389880 kB' 'KReclaimable: 232168 kB' 'Slab: 802504 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570336 kB' 'KernelStack: 20416 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8383920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace of the field-by-field scan omitted: the read loop again walks every meminfo key, 'continue'-ing past each one until HugePages_Total matches]
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:50.681 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.682 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86056212 kB' 'MemUsed: 11606472 kB' 'SwapCached: 0 kB' 'Active: 4772272 kB' 'Inactive: 3320472 kB' 'Active(anon): 4525768 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917704 kB' 'Mapped: 93832 kB' 'AnonPages: 177712 kB' 'Shmem: 4350728 kB' 'KernelStack: 11768 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343372 kB' 'SReclaimable: 96284 kB' 'SUnreclaim: 247088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
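The per-node pass above works because the kernel exposes a per-NUMA-node meminfo at /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N" prefix that common.sh strips via mem=("${mem[@]#Node +([0-9]) }"). A hedged sketch of enumerating nodes and pulling each node's hugepage total, in the spirit of the get_nodes loop just traced (awk's $NF sidesteps the prefix here):

    #!/usr/bin/env bash
    # Sketch: per-NUMA-node hugepage totals, mirroring the traced
    # get_nodes loop over /sys/devices/system/node/node+([0-9]).
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Node meminfo lines look like: "Node 0 HugePages_Total:  1024"
        total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        echo "node$n: HugePages_Total=$total"
    done

On this host it would report node0 with 1024 pages and node1 with 0, matching the nodes_sys assignments in the trace.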
[xtrace of the field-by-field scan omitted: the same read loop walks the node0 meminfo snapshot above, 'continue'-ing past every key until HugePages_Surp matches]
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:50.683 node0=1024 expecting 1024
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
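The check just logged is the heart of verify_nr_hugepages: system-wide, HugePages_Total must equal the requested page count plus surplus plus reserved, and each node's observed total is echoed against the expected split ('node0=1024 expecting 1024'). A compact sketch of that accounting; 'requested' stands in for the suite's nr_hugepages value from the log:

    #!/usr/bin/env bash
    # Sketch of the hugepage accounting verified above.
    requested=1024
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)

    # Mirrors "(( 1024 == nr_hugepages + surp + resv ))" in the trace.
    (( total == requested + surp + resv )) || {
        echo "hugepage accounting mismatch: total=$total" >&2
        exit 1
    }

    node0=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
    echo "node0=$node0 expecting $requested"
    [[ $node0 == "$requested" ]]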
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.683 23:07:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.236 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:53.236 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:53.236 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:53.236 INFO: Requested 512 hugepages but 1024 already allocated on node0
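Those "Already using the vfio-pci driver" lines are scripts/setup.sh reporting that each PCI function is already bound to the driver it wants, so no rebind is needed; the binding is visible as a sysfs symlink. A sketch of that check (the BDF list is lifted from the log for illustration, not from setup.sh itself):

    #!/usr/bin/env bash
    # Sketch: report the kernel driver currently bound to a PCI device,
    # the same fact behind the "Already using the vfio-pci driver" lines.
    for bdf in 0000:5e:00.0 0000:00:04.0; do
        link=/sys/bus/pci/devices/$bdf/driver
        if [[ -e $link ]]; then
            echo "$bdf: bound to $(basename "$(readlink -f "$link")")"
        else
            echo "$bdf: not bound to any driver"
        fi
    done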
vfio-pci driver 00:03:53.236 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.236 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.236 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.236 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.236 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.236 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.236 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175503764 kB' 'MemAvailable: 178356332 kB' 'Buffers: 3896 kB' 'Cached: 10267004 kB' 'SwapCached: 0 kB' 'Active: 7282208 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894324 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507632 kB' 'Mapped: 202020 kB' 'Shmem: 6389964 kB' 'KReclaimable: 232168 kB' 'Slab: 802380 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570212 kB' 'KernelStack: 20544 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 
kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.236 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.237 23:08:02 
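Every lookup traced above goes through the same helper: get_meminfo in setup/common.sh snapshots /proc/meminfo (or, when given a node, that node's meminfo under /sys/devices/system/node with the "Node N " prefix stripped), then splits each line on ': ' with read -r var val _ and echoes the value of the first matching key. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace above;
# reconstructed for illustration, not the actual setup/common.sh code.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo mem

	# With a node argument, read the per-node meminfo from sysfs instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	# Each line is "Key:   value [kB]"; match the key, echo the value.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo AnonHugePages   # prints 0 on this box, matching the trace

The linear scan is why each call emits one xtrace check-and-continue pair per meminfo key before the match.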
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.237 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.238 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175503680 kB' 'MemAvailable: 178356248 kB' 'Buffers: 3896 kB' 'Cached: 10267008 kB' 'SwapCached: 0 kB' 'Active: 7282068 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894184 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507436 kB' 'Mapped: 201976 kB' 'Shmem: 6389968 kB' 'KReclaimable: 232168 kB' 'Slab: 802360 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570192 kB' 'KernelStack: 20496 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace condensed: '[[ <key> == HugePages_Surp ]]' then 'continue' for every key from MemTotal through HugePages_Rsvd]
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
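Each of these lookups costs dozens of trace lines because the helper walks every meminfo key; outside the harness the same values fall out of one-liners. Illustrative equivalents, not what the test actually runs:

# One-shot equivalents of the surp/resv lookups traced here (illustrative).
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # 0 on this box
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # 0 on this box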
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.239 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175504300 kB' 'MemAvailable: 178356868 kB' 'Buffers: 3896 kB' 'Cached: 10267028 kB' 'SwapCached: 0 kB' 'Active: 7281932 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894048 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507276 kB' 'Mapped: 201976 kB' 'Shmem: 6389988 kB' 'KReclaimable: 232168 kB' 'Slab: 802388 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570220 kB' 'KernelStack: 20496 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
[xtrace condensed: '[[ <key> == HugePages_Rsvd ]]' then 'continue' for every key from MemTotal through FilePmdMapped]
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.241 23:08:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.241 nr_hugepages=1024 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.241 resv_hugepages=0 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.241 surplus_hugepages=0 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.241 anon_hugepages=0 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.241 23:08:02 
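What the trace above is doing, in brief: get_meminfo splits each record of /proc/meminfo on ': ' and returns the value once the requested field matches, here yielding 0 for HugePages_Rsvd. A minimal standalone sketch of the same pattern (meminfo_value is a hypothetical name for illustration; the real helper is get_meminfo in test/setup/common.sh):

    #!/usr/bin/env bash
    # Hypothetical stand-in for the field scan traced above.
    meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # Emit the numeric value once the requested field is found.
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1
    }
    meminfo_value HugePages_Rsvd   # prints 0 on the host traced here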
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.241 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 175503292 kB' 'MemAvailable: 178355860 kB' 'Buffers: 3896 kB' 'Cached: 10267048 kB' 'SwapCached: 0 kB' 'Active: 7281916 kB' 'Inactive: 3493052 kB' 'Active(anon): 6894032 kB' 'Inactive(anon): 0 kB' 'Active(file): 387884 kB' 'Inactive(file): 3493052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507240 kB' 'Mapped: 201976 kB' 'Shmem: 6390008 kB' 'KReclaimable: 232168 kB' 'Slab: 802388 kB' 'SReclaimable: 232168 kB' 'SUnreclaim: 570220 kB' 'KernelStack: 20480 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 8382452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 10516480 kB' 'DirectMap1G: 188743680 kB'
00:03:53.242 [... repetitive xtrace elided: setup/common.sh@31-32 test every /proc/meminfo field from MemTotal through Unaccepted against HugePages_Total and continue on each non-match ...]
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
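The get_nodes call traced above discovers the host's NUMA topology from sysfs; on this host it finds two nodes (no_nodes=2), with the trace showing the evaluated assignments 1024 for node0 and 0 for node1, presumably read from each node's nr_hugepages counters. A sketch of the same enumeration, assuming extglob (as the SPDK scripts use) and 2048 kB pages:

    #!/usr/bin/env bash
    # Assumed reconstruction of the node scan; nodes_sys maps a node id
    # to its currently allocated 2048 kB hugepage count.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this dual-socket test host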
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86042252 kB' 'MemUsed: 11620432 kB' 'SwapCached: 0 kB' 'Active: 4769092 kB' 'Inactive: 3320472 kB' 'Active(anon): 4522588 kB' 'Inactive(anon): 0 kB' 'Active(file): 246504 kB' 'Inactive(file): 3320472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7917828 kB' 'Mapped: 93336 kB' 'AnonPages: 174880 kB' 'Shmem: 4350852 kB' 'KernelStack: 11832 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96284 kB' 'Slab: 343172 kB' 'SReclaimable: 96284 kB' 'SUnreclaim: 246888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:53.243 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.244 [... repetitive xtrace elided: setup/common.sh@31-32 test every node0 meminfo field from MemTotal through HugePages_Free against HugePages_Surp and continue on each non-match ...]
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:53.245 node0=1024 expecting 1024
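The node-scoped query above differs from the global one only in its data source: when get_meminfo is given a node argument, common.sh@23-24 swap /proc/meminfo for that node's sysfs copy, whose records carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips. Roughly, as a simplified sketch:

    # Sketch of the source selection at common.sh@22-24 (simplified).
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    grep HugePages_Surp "$mem_f"   # 'Node 0 HugePages_Surp: 0' on this host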
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:53.245
00:03:53.245 real	0m5.138s
00:03:53.245 user	0m1.949s
00:03:53.245 sys	0m3.208s
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:53.245 23:08:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:53.245 ************************************
00:03:53.245 END TEST no_shrink_alloc
00:03:53.245 ************************************
00:03:53.245 23:08:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:53.245 23:08:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:53.245
00:03:53.245 real	0m19.953s
00:03:53.245 user	0m7.476s
00:03:53.245 sys	0m11.717s
00:03:53.245 23:08:02 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:53.245 23:08:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:53.245 ************************************
00:03:53.245 END TEST hugepages
00:03:53.245 ************************************
00:03:53.245 23:08:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0
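The clear_hp teardown traced above resets every hugepage pool it touched: for each node and each page size it writes 0 to the nr_hugepages counter, then exports CLEAR_HUGE=yes for later stages. A sketch of the same teardown (needs root, and assumes nodes_sys was populated as in the earlier node scan):

    # Assumed reconstruction of clear_hp from the trace above.
    for node in "${!nodes_sys[@]}"; do
        for hp in /sys/devices/system/node/node$node/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release this size's pages on this node
        done
    done
    export CLEAR_HUGE=yes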
00:03:53.245 23:08:02 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:53.245 23:08:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:53.245 23:08:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:53.245 23:08:02 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:53.245 ************************************
00:03:53.245 START TEST driver
00:03:53.245 ************************************
00:03:53.245 23:08:02 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:53.504 * Looking for test storage...
00:03:53.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:53.504 23:08:02 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:53.504 23:08:02 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:53.504 23:08:02 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:57.701 23:08:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:57.701 23:08:06 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:57.701 23:08:06 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:57.701 23:08:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:57.701 ************************************
00:03:57.701 START TEST guess_driver
00:03:57.701 ************************************
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:57.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
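The pick_driver trace above settles on vfio-pci because the host has populated IOMMU groups (174 of them) and modprobe can resolve vfio_pci's full dependency chain. A condensed sketch of that decision; the uio_pci_generic fallback here is an assumption about the untraced branch, not a confirmed detail of the SPDK script:

    # Assumed condensation of the pick_driver/vfio logic traced above.
    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        driver=vfio-pci          # IOMMU active and module chain resolvable
    else
        driver=uio_pci_generic   # assumed fallback when vfio is unusable
    fi
    echo "Looking for driver=$driver"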
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:57.701 Looking for driver=vfio-pci 00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.701 23:08:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.610 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.870 23:08:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.809 23:08:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.005 00:04:05.005 real 0m7.063s 00:04:05.005 user 0m1.895s 00:04:05.005 sys 0m3.565s 00:04:05.005 23:08:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.005 23:08:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 ************************************ 00:04:05.005 END TEST guess_driver 00:04:05.005 ************************************ 00:04:05.005 23:08:13 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:05.005 00:04:05.005 real 0m10.980s 00:04:05.005 user 0m3.076s 00:04:05.005 sys 0m5.553s 00:04:05.005 23:08:13 
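The wall of near-identical @57/@58/@61 lines above is one loop iteration per device reported by "setup.sh config": the test reads each line, keeps only those whose fifth field is the "->" rebind marker, and records a failure if the bound driver differs from the guess. A sketch of that loop, fed sample lines shaped like the status output rather than the real thing:

expected=vfio-pci
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue            # not a rebind line
    [[ $setup_driver == "$expected" ]] || fail=1
done <<'EOF'
0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
EOF
(( fail == 0 )) && echo "every device is bound to $expected"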
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.005 23:08:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 ************************************ 00:04:05.005 END TEST driver 00:04:05.005 ************************************ 00:04:05.005 23:08:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.005 23:08:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:05.005 23:08:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.005 23:08:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.005 23:08:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 ************************************ 00:04:05.005 START TEST devices 00:04:05.005 ************************************ 00:04:05.005 23:08:13 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:05.005 * Looking for test storage... 00:04:05.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:05.005 23:08:13 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:05.005 23:08:13 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:05.005 23:08:13 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.005 23:08:13 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.599 23:08:16 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:07.599 23:08:16 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:07.599 
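get_zoned_devs, traced above, walks /sys/block/nvme* and consults each namespace's queue/zoned attribute so ZNS devices can be excluded from the generic mount tests ("none" means a conventional, non-zoned device). Roughly:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    if [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1   # e.g. zoned_devs[nvme1n2]=1
    fi
done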
23:08:16 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:07.599 No valid GPT data, bailing 00:04:07.599 23:08:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.599 23:08:16 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:07.599 23:08:16 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:07.599 23:08:16 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:07.599 23:08:16 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:07.600 23:08:16 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:07.600 23:08:16 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:07.600 23:08:16 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:07.600 23:08:16 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:07.600 23:08:16 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:07.600 23:08:16 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:07.600 23:08:16 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:07.600 23:08:16 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:07.600 23:08:16 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.600 23:08:16 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.600 23:08:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.600 ************************************ 00:04:07.600 START TEST nvme_mount 00:04:07.600 ************************************ 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
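"No valid GPT data, bailing" is the desired outcome of the probe above: a candidate disk must carry no partition table and must clear the 3 GiB floor before the test claims it. A sketch of both checks; blkid stands in for the repo's spdk-gpt.py here, and the sector math relies on /sys/block/<dev>/size always counting 512-byte sectors:

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace
dev=nvme0n1
pt=$(blkid -s PTTYPE -o value "/dev/$dev")   # empty when no table exists
[[ -z $pt ]] || { echo "$dev already partitioned ($pt)"; exit 1; }
size=$(( $(<"/sys/block/$dev/size") * 512 )) # sectors -> bytes
(( size >= min_disk_size )) && echo "$dev usable: $size bytes"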
# (( part <= part_no )) 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:07.600 23:08:16 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:08.539 Creating new GPT entries in memory. 00:04:08.539 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:08.539 other utilities. 00:04:08.539 23:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:08.539 23:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.539 23:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.539 23:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.539 23:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:09.486 Creating new GPT entries in memory. 00:04:09.486 The operation has completed successfully. 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2200808 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:09.486 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.744 23:08:18 
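The partition just created spans sectors 2048 through 2099199, i.e. exactly 2097152 x 512-byte sectors = 1 GiB, which is where the "(( size /= 512 ))" conversion above comes from. An equivalent standalone sequence, with udevadm settle as a simpler stand-in for the repo's sync_dev_uevents.sh helper:

disk=nvme0n1
sgdisk "/dev/$disk" --zap-all                 # destroy any old GPT/MBR
sgdisk "/dev/$disk" --new=1:2048:2099199      # one 1 GiB partition
udevadm settle                                # wait for the p1 node to appear
[[ -b /dev/${disk}p1 ]] && mkfs.ext4 -qF "/dev/${disk}p1"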
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.744 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.745 23:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.277 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- 
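The "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev" line that follows is setup.sh declining to unbind a controller whose namespace is in use, which is exactly what the verify step wants to see while the test filesystem is mounted. A simplified sketch of such a guard; the real script also tracks dm holders and swap per namespace, which this omits:

blk_busy() {
    local blk=$1
    grep -q "^/dev/$blk " /proc/mounts && return 0            # mounted
    compgen -G "/sys/class/block/$blk/holders/*" >/dev/null   # claimed by dm
}
blk_busy nvme0n1p1 && echo "nvme0n1p1 busy; leaving 0000:5e:00.0 bound"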
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.278 23:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:12.278 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.278 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.537 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:12.537 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:12.537 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:12.537 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- 
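Teardown in cleanup_nvme is deliberately layered: unmount, erase the partition's ext4 superblock magic (the "2 bytes were erased ... 53 ef" line), then erase the disk's primary GPT, backup GPT and protective MBR (the three erase lines that follow it). In isolation:

mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
wipefs --all /dev/nvme0n1p1   # drops the ext4 signature (0x53ef)
wipefs --all /dev/nvme0n1     # drops GPT at LBA 1, the backup GPT, and PMBR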
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.537 23:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.075 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.335 23:08:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.874 23:08:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.134 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.134 00:04:18.134 real 0m10.608s 00:04:18.134 user 0m3.153s 00:04:18.134 sys 0m5.274s 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.134 23:08:27 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:18.134 ************************************ 00:04:18.134 END TEST nvme_mount 00:04:18.134 ************************************ 00:04:18.134 23:08:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:18.134 23:08:27 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:18.134 23:08:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.134 23:08:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.134 23:08:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.134 ************************************ 00:04:18.134 START TEST dm_mount 00:04:18.134 ************************************ 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.134 23:08:27 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:19.072 Creating new GPT entries in memory. 00:04:19.072 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.072 other utilities. 00:04:19.072 23:08:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.072 23:08:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.072 23:08:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:19.072 23:08:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.072 23:08:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.452 Creating new GPT entries in memory. 00:04:20.452 The operation has completed successfully. 00:04:20.452 23:08:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.452 23:08:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.452 23:08:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.452 23:08:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.452 23:08:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:21.390 The operation has completed successfully. 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2204998 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- 
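dmsetup create reads its mapping table from stdin, which the trace does not echo; the two 1 GiB partitions made here (sectors 2048-2099199 and 2099200-4196351) end up concatenated into a single linear mapping that surfaces as /dev/dm-2. A plausible equivalent table, assumed from the partition layout rather than taken from devices.sh:

size1=$(blockdev --getsz /dev/nvme0n1p1)   # lengths in 512-byte sectors
size2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $size1 linear /dev/nvme0n1p1 0
$size1 $size2 linear /dev/nvme0n1p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test       # -> /dev/dm-2 in this run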
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.390 23:08:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:23.929 23:08:32 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.929 23:08:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.466 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:26.726 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:26.726 00:04:26.726 real 0m8.604s 00:04:26.726 user 0m2.076s 00:04:26.726 sys 0m3.545s 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.726 23:08:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:26.726 ************************************ 00:04:26.726 END TEST dm_mount 00:04:26.726 ************************************ 00:04:26.726 23:08:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
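cleanup_dm mirrors cleanup_nvme but removes the mapping before scrubbing the partitions; --force swaps in an error target if the device is still open instead of failing outright. A sketch:

dm_mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
mountpoint -q "$dm_mnt" && umount "$dm_mnt"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2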
0 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.726 23:08:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.985 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:26.985 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:26.985 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.985 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.985 23:08:36 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.985 23:08:36 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.985 23:08:36 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.985 23:08:36 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.985 23:08:36 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.986 23:08:36 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.986 23:08:36 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.986 00:04:26.986 real 0m22.713s 00:04:26.986 user 0m6.461s 00:04:26.986 sys 0m10.927s 00:04:26.986 23:08:36 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.986 23:08:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.986 ************************************ 00:04:26.986 END TEST devices 00:04:26.986 ************************************ 00:04:26.986 23:08:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:26.986 00:04:26.986 real 1m13.410s 00:04:26.986 user 0m23.568s 00:04:26.986 sys 0m39.937s 00:04:26.986 23:08:36 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.986 23:08:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.986 ************************************ 00:04:26.986 END TEST setup.sh 00:04:26.986 ************************************ 00:04:27.246 23:08:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.246 23:08:36 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.153 Hugepages 00:04:29.153 node hugesize free / total 00:04:29.153 node0 1048576kB 0 / 0 00:04:29.153 node0 2048kB 2048 / 2048 00:04:29.153 node1 1048576kB 0 / 0 00:04:29.153 node1 2048kB 0 / 0 00:04:29.153 00:04:29.153 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.153 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:29.153 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:29.412 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:29.412 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:29.412 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:29.412 23:08:38 -- spdk/autotest.sh@130 -- # uname -s 00:04:29.412 23:08:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:29.412 23:08:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:29.412 23:08:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.365 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.365 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.365 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.365 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:31.633 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.603 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.603 23:08:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:33.536 23:08:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:33.536 23:08:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:33.536 23:08:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.536 23:08:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:33.536 23:08:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:33.536 23:08:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:33.536 23:08:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.536 23:08:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.536 23:08:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:33.794 23:08:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:33.794 23:08:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:33.794 23:08:42 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.321 Waiting for block devices as requested 00:04:36.321 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:36.321 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.321 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.321 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.321 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.321 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.321 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.321 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.580 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:36.580 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.580 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.839 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.839 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.839 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.839 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:37.097 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:37.097 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.097 23:08:46 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:37.097 23:08:46 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:37.097 23:08:46 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:37.097 23:08:46 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:37.098 23:08:46 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:37.098 23:08:46 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:37.098 23:08:46 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:37.098 23:08:46 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:37.098 23:08:46 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:37.098 23:08:46 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:37.098 23:08:46 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:37.098 23:08:46 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:37.098 23:08:46 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:37.098 23:08:46 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:37.098 23:08:46 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:37.098 23:08:46 -- common/autotest_common.sh@1557 -- # continue 00:04:37.098 23:08:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:37.098 23:08:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.098 23:08:46 -- common/autotest_common.sh@10 -- # set +x 00:04:37.098 23:08:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:37.098 23:08:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.098 23:08:46 -- common/autotest_common.sh@10 -- # set +x 00:04:37.098 23:08:46 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.629 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:39.629 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.629 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.568 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:40.568 23:08:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:40.568 23:08:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.568 23:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:40.568 23:08:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:40.568 23:08:49 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:40.568 23:08:49 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.828 23:08:49 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:40.828 23:08:49 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:40.828 23:08:49 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:40.828 23:08:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:40.828 23:08:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:40.828 23:08:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.828 23:08:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:40.828 23:08:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:40.828 23:08:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:40.828 23:08:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:40.828 23:08:49 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:40.828 23:08:49 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:40.828 23:08:49 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:40.828 23:08:49 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:40.828 23:08:49 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:40.828 23:08:49 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:40.828 23:08:49 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:40.828 23:08:49 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2213632 00:04:40.828 23:08:49 -- common/autotest_common.sh@1598 -- # waitforlisten 2213632 00:04:40.828 23:08:49 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.828 23:08:49 -- common/autotest_common.sh@829 -- # '[' -z 2213632 ']' 00:04:40.828 23:08:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.828 23:08:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.828 23:08:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.828 23:08:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.828 23:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:40.828 [2024-07-10 23:08:49.823852] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:04:40.828 [2024-07-10 23:08:49.823946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213632 ] 00:04:40.828 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.087 [2024-07-10 23:08:49.927741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.087 [2024-07-10 23:08:50.149647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.025 23:08:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.025 23:08:51 -- common/autotest_common.sh@862 -- # return 0 00:04:42.025 23:08:51 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:42.025 23:08:51 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:42.025 23:08:51 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:45.315 nvme0n1 00:04:45.315 23:08:54 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:45.315 [2024-07-10 23:08:54.238319] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:45.315 request: 00:04:45.315 { 00:04:45.315 "nvme_ctrlr_name": "nvme0", 00:04:45.315 "password": "test", 00:04:45.315 "method": "bdev_nvme_opal_revert", 00:04:45.315 "req_id": 1 00:04:45.315 } 00:04:45.315 Got JSON-RPC error response 00:04:45.315 response: 00:04:45.315 { 00:04:45.315 "code": -32602, 00:04:45.315 "message": "Invalid parameters" 00:04:45.315 } 00:04:45.315 23:08:54 -- common/autotest_common.sh@1604 -- # true 00:04:45.315 23:08:54 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:45.315 23:08:54 -- common/autotest_common.sh@1608 -- # killprocess 2213632 00:04:45.315 23:08:54 -- common/autotest_common.sh@948 -- # '[' -z 2213632 ']' 00:04:45.315 23:08:54 -- common/autotest_common.sh@952 -- # kill -0 2213632 00:04:45.315 23:08:54 -- common/autotest_common.sh@953 -- # uname 00:04:45.315 23:08:54 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.315 23:08:54 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213632 00:04:45.315 23:08:54 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.315 23:08:54 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.315 23:08:54 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213632' 00:04:45.315 killing process with pid 2213632 00:04:45.315 23:08:54 -- common/autotest_common.sh@967 -- # kill 2213632 00:04:45.315 23:08:54 -- common/autotest_common.sh@972 -- # wait 2213632 00:04:49.512 23:08:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:49.512 23:08:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:49.512 23:08:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.512 23:08:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.512 23:08:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:49.512 23:08:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.512 23:08:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 23:08:57 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:49.512 23:08:57 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.512 23:08:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:49.512 23:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.512 23:08:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 START TEST env 00:04:49.512 ************************************ 00:04:49.512 23:08:57 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.512 * Looking for test storage... 00:04:49.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:49.512 23:08:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.512 23:08:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.512 23:08:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.512 23:08:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 START TEST env_memory 00:04:49.512 ************************************ 00:04:49.512 23:08:58 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.512 00:04:49.512 00:04:49.512 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.512 http://cunit.sourceforge.net/ 00:04:49.512 00:04:49.512 00:04:49.512 Suite: memory 00:04:49.512 Test: alloc and free memory map ...[2024-07-10 23:08:58.106884] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.512 passed 00:04:49.512 Test: mem map translation ...[2024-07-10 23:08:58.146379] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.512 [2024-07-10 23:08:58.146404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.512 [2024-07-10 23:08:58.146469] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.512 [2024-07-10 23:08:58.146485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.512 passed 00:04:49.512 Test: mem map registration ...[2024-07-10 23:08:58.208216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:49.512 [2024-07-10 23:08:58.208241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:49.512 passed 00:04:49.512 Test: mem map adjacent registrations ...passed 00:04:49.512 00:04:49.512 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.512 suites 1 1 n/a 0 0 00:04:49.512 tests 4 4 4 0 0 00:04:49.512 asserts 152 152 152 0 n/a 00:04:49.512 00:04:49.512 Elapsed time = 0.225 seconds 00:04:49.512 00:04:49.512 real 0m0.259s 00:04:49.512 user 0m0.238s 00:04:49.512 sys 0m0.020s 00:04:49.512 23:08:58 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.512 23:08:58 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:49.512 ************************************ 00:04:49.512 END TEST env_memory 00:04:49.512 ************************************ 00:04:49.512 23:08:58 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.512 23:08:58 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.512 23:08:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.512 23:08:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.512 23:08:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 START TEST env_vtophys 00:04:49.512 ************************************ 00:04:49.512 23:08:58 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.512 EAL: lib.eal log level changed from notice to debug 00:04:49.512 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.512 EAL: Detected lcore 1 as core 1 on socket 0 00:04:49.512 EAL: Detected lcore 2 as core 2 on socket 0 00:04:49.512 EAL: Detected lcore 3 as core 3 on socket 0 00:04:49.512 EAL: Detected lcore 4 as core 4 on socket 0 00:04:49.512 EAL: Detected lcore 5 as core 5 on socket 0 00:04:49.512 EAL: Detected lcore 6 as core 6 on socket 0 00:04:49.512 EAL: Detected lcore 7 as core 8 on socket 0 00:04:49.512 EAL: Detected lcore 8 as core 9 on socket 0 00:04:49.512 EAL: Detected lcore 9 as core 10 on socket 0 00:04:49.512 EAL: Detected lcore 10 as core 11 on socket 0 00:04:49.512 EAL: Detected lcore 11 as core 12 on socket 0 00:04:49.512 EAL: Detected lcore 12 as core 13 on socket 0 00:04:49.513 EAL: Detected lcore 13 as core 16 on socket 0 00:04:49.513 EAL: Detected lcore 14 as core 17 on socket 0 00:04:49.513 EAL: Detected lcore 15 as core 18 on socket 0 00:04:49.513 EAL: Detected lcore 16 as core 19 on socket 0 00:04:49.513 EAL: Detected lcore 17 as core 20 on socket 0 00:04:49.513 EAL: Detected lcore 18 as core 21 on socket 0 00:04:49.513 EAL: Detected lcore 19 as core 25 on socket 0 00:04:49.513 EAL: Detected lcore 20 as core 26 on socket 0 00:04:49.513 EAL: Detected lcore 21 as core 27 on socket 0 00:04:49.513 EAL: Detected lcore 22 as core 28 on socket 0 00:04:49.513 EAL: Detected lcore 23 as core 29 on socket 0 00:04:49.513 EAL: Detected lcore 24 as core 0 on socket 1 00:04:49.513 EAL: Detected lcore 25 as core 1 on socket 1 00:04:49.513 EAL: Detected lcore 26 as core 2 on socket 1 00:04:49.513 EAL: Detected lcore 27 as core 3 on socket 1 00:04:49.513 EAL: Detected lcore 28 as core 4 on socket 1 00:04:49.513 EAL: Detected lcore 29 as core 5 on socket 1 00:04:49.513 EAL: Detected lcore 30 as core 6 on socket 1 00:04:49.513 EAL: Detected lcore 31 as core 9 on socket 1 00:04:49.513 EAL: Detected lcore 32 as core 10 on socket 1 00:04:49.513 EAL: Detected lcore 33 as core 11 on socket 1 00:04:49.513 EAL: Detected lcore 34 as core 12 on socket 1 00:04:49.513 EAL: Detected lcore 35 as core 13 on socket 1 00:04:49.513 EAL: Detected lcore 36 as core 16 on socket 1 00:04:49.513 EAL: Detected lcore 37 as core 17 on socket 1 00:04:49.513 EAL: Detected lcore 38 as core 18 on socket 1 00:04:49.513 EAL: Detected lcore 39 as core 19 on socket 1 00:04:49.513 EAL: Detected lcore 40 as core 20 on socket 1 00:04:49.513 EAL: Detected lcore 41 as core 21 on socket 1 00:04:49.513 EAL: Detected lcore 42 as core 24 on socket 1 00:04:49.513 EAL: Detected lcore 43 as core 25 on socket 1 00:04:49.513 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:49.513 EAL: Detected lcore 45 as core 27 on socket 1 00:04:49.513 EAL: Detected lcore 46 as core 28 on socket 1 00:04:49.513 EAL: Detected lcore 47 as core 29 on socket 1 00:04:49.513 EAL: Detected lcore 48 as core 0 on socket 0 00:04:49.513 EAL: Detected lcore 49 as core 1 on socket 0 00:04:49.513 EAL: Detected lcore 50 as core 2 on socket 0 00:04:49.513 EAL: Detected lcore 51 as core 3 on socket 0 00:04:49.513 EAL: Detected lcore 52 as core 4 on socket 0 00:04:49.513 EAL: Detected lcore 53 as core 5 on socket 0 00:04:49.513 EAL: Detected lcore 54 as core 6 on socket 0 00:04:49.513 EAL: Detected lcore 55 as core 8 on socket 0 00:04:49.513 EAL: Detected lcore 56 as core 9 on socket 0 00:04:49.513 EAL: Detected lcore 57 as core 10 on socket 0 00:04:49.513 EAL: Detected lcore 58 as core 11 on socket 0 00:04:49.513 EAL: Detected lcore 59 as core 12 on socket 0 00:04:49.513 EAL: Detected lcore 60 as core 13 on socket 0 00:04:49.513 EAL: Detected lcore 61 as core 16 on socket 0 00:04:49.513 EAL: Detected lcore 62 as core 17 on socket 0 00:04:49.513 EAL: Detected lcore 63 as core 18 on socket 0 00:04:49.513 EAL: Detected lcore 64 as core 19 on socket 0 00:04:49.513 EAL: Detected lcore 65 as core 20 on socket 0 00:04:49.513 EAL: Detected lcore 66 as core 21 on socket 0 00:04:49.513 EAL: Detected lcore 67 as core 25 on socket 0 00:04:49.513 EAL: Detected lcore 68 as core 26 on socket 0 00:04:49.513 EAL: Detected lcore 69 as core 27 on socket 0 00:04:49.513 EAL: Detected lcore 70 as core 28 on socket 0 00:04:49.513 EAL: Detected lcore 71 as core 29 on socket 0 00:04:49.513 EAL: Detected lcore 72 as core 0 on socket 1 00:04:49.513 EAL: Detected lcore 73 as core 1 on socket 1 00:04:49.513 EAL: Detected lcore 74 as core 2 on socket 1 00:04:49.513 EAL: Detected lcore 75 as core 3 on socket 1 00:04:49.513 EAL: Detected lcore 76 as core 4 on socket 1 00:04:49.513 EAL: Detected lcore 77 as core 5 on socket 1 00:04:49.513 EAL: Detected lcore 78 as core 6 on socket 1 00:04:49.513 EAL: Detected lcore 79 as core 9 on socket 1 00:04:49.513 EAL: Detected lcore 80 as core 10 on socket 1 00:04:49.513 EAL: Detected lcore 81 as core 11 on socket 1 00:04:49.513 EAL: Detected lcore 82 as core 12 on socket 1 00:04:49.513 EAL: Detected lcore 83 as core 13 on socket 1 00:04:49.513 EAL: Detected lcore 84 as core 16 on socket 1 00:04:49.513 EAL: Detected lcore 85 as core 17 on socket 1 00:04:49.513 EAL: Detected lcore 86 as core 18 on socket 1 00:04:49.513 EAL: Detected lcore 87 as core 19 on socket 1 00:04:49.513 EAL: Detected lcore 88 as core 20 on socket 1 00:04:49.513 EAL: Detected lcore 89 as core 21 on socket 1 00:04:49.513 EAL: Detected lcore 90 as core 24 on socket 1 00:04:49.513 EAL: Detected lcore 91 as core 25 on socket 1 00:04:49.513 EAL: Detected lcore 92 as core 26 on socket 1 00:04:49.513 EAL: Detected lcore 93 as core 27 on socket 1 00:04:49.513 EAL: Detected lcore 94 as core 28 on socket 1 00:04:49.513 EAL: Detected lcore 95 as core 29 on socket 1 00:04:49.513 EAL: Maximum logical cores by configuration: 128 00:04:49.513 EAL: Detected CPU lcores: 96 00:04:49.513 EAL: Detected NUMA nodes: 2 00:04:49.513 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:49.513 EAL: Detected shared linkage of DPDK 00:04:49.513 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.513 EAL: Bus pci wants IOVA as 'DC' 00:04:49.513 EAL: Buses did not request a specific IOVA mode. 00:04:49.513 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:04:49.513 EAL: Selected IOVA mode 'VA' 00:04:49.513 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.513 EAL: Probing VFIO support... 00:04:49.513 EAL: IOMMU type 1 (Type 1) is supported 00:04:49.513 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:49.513 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:49.513 EAL: VFIO support initialized 00:04:49.513 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.513 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.513 EAL: Setting up physically contiguous memory... 00:04:49.513 EAL: Setting maximum number of open files to 524288 00:04:49.513 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.513 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:49.513 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.513 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:49.513 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:49.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.513 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:49.513 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.513 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:49.513 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:49.513 EAL: Hugepages will be freed exactly as allocated. 00:04:49.513 EAL: No shared files mode enabled, IPC is disabled 00:04:49.513 EAL: No shared files mode enabled, IPC is disabled 00:04:49.513 EAL: TSC frequency is ~2300000 KHz 00:04:49.513 EAL: Main lcore 0 is ready (tid=7f9af1609a40;cpuset=[0]) 00:04:49.513 EAL: Trying to obtain current memory policy. 00:04:49.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.513 EAL: Restoring previous memory policy: 0 00:04:49.513 EAL: request: mp_malloc_sync 00:04:49.513 EAL: No shared files mode enabled, IPC is disabled 00:04:49.513 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.513 EAL: No shared files mode enabled, IPC is disabled 00:04:49.513 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.513 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.513 00:04:49.513 00:04:49.513 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.513 http://cunit.sourceforge.net/ 00:04:49.513 00:04:49.513 00:04:49.513 Suite: components_suite 00:04:49.773 Test: vtophys_malloc_test ...passed 00:04:49.773 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.773 EAL: Restoring previous memory policy: 4 00:04:49.773 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.773 EAL: request: mp_malloc_sync 00:04:49.773 EAL: No shared files mode enabled, IPC is disabled 00:04:49.773 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.773 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.773 EAL: request: mp_malloc_sync 00:04:49.773 EAL: No shared files mode enabled, IPC is disabled 00:04:49.773 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.773 EAL: Trying to obtain current memory policy. 00:04:49.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.773 EAL: Restoring previous memory policy: 4 00:04:49.773 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.773 EAL: request: mp_malloc_sync 00:04:49.773 EAL: No shared files mode enabled, IPC is disabled 00:04:49.773 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.773 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.773 EAL: request: mp_malloc_sync 00:04:49.773 EAL: No shared files mode enabled, IPC is disabled 00:04:49.773 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.773 EAL: Trying to obtain current memory policy. 
00:04:49.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.773 EAL: Restoring previous memory policy: 4 00:04:49.773 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.773 EAL: request: mp_malloc_sync 00:04:49.773 EAL: No shared files mode enabled, IPC is disabled 00:04:49.773 EAL: Heap on socket 0 was expanded by 10MB 00:04:50.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.033 EAL: request: mp_malloc_sync 00:04:50.033 EAL: No shared files mode enabled, IPC is disabled 00:04:50.033 EAL: Heap on socket 0 was shrunk by 10MB 00:04:50.033 EAL: Trying to obtain current memory policy. 00:04:50.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.033 EAL: Restoring previous memory policy: 4 00:04:50.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.033 EAL: request: mp_malloc_sync 00:04:50.033 EAL: No shared files mode enabled, IPC is disabled 00:04:50.033 EAL: Heap on socket 0 was expanded by 18MB 00:04:50.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.033 EAL: request: mp_malloc_sync 00:04:50.033 EAL: No shared files mode enabled, IPC is disabled 00:04:50.033 EAL: Heap on socket 0 was shrunk by 18MB 00:04:50.033 EAL: Trying to obtain current memory policy. 00:04:50.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.033 EAL: Restoring previous memory policy: 4 00:04:50.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.033 EAL: request: mp_malloc_sync 00:04:50.033 EAL: No shared files mode enabled, IPC is disabled 00:04:50.033 EAL: Heap on socket 0 was expanded by 34MB 00:04:50.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.033 EAL: request: mp_malloc_sync 00:04:50.033 EAL: No shared files mode enabled, IPC is disabled 00:04:50.033 EAL: Heap on socket 0 was shrunk by 34MB 00:04:50.033 EAL: Trying to obtain current memory policy. 00:04:50.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.033 EAL: Restoring previous memory policy: 4 00:04:50.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.033 EAL: request: mp_malloc_sync 00:04:50.033 EAL: No shared files mode enabled, IPC is disabled 00:04:50.033 EAL: Heap on socket 0 was expanded by 66MB 00:04:50.291 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.291 EAL: request: mp_malloc_sync 00:04:50.291 EAL: No shared files mode enabled, IPC is disabled 00:04:50.291 EAL: Heap on socket 0 was shrunk by 66MB 00:04:50.291 EAL: Trying to obtain current memory policy. 00:04:50.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.291 EAL: Restoring previous memory policy: 4 00:04:50.291 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.291 EAL: request: mp_malloc_sync 00:04:50.291 EAL: No shared files mode enabled, IPC is disabled 00:04:50.291 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.550 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.809 EAL: request: mp_malloc_sync 00:04:50.809 EAL: No shared files mode enabled, IPC is disabled 00:04:50.809 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.809 EAL: Trying to obtain current memory policy. 
00:04:50.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.809 EAL: Restoring previous memory policy: 4 00:04:50.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.809 EAL: request: mp_malloc_sync 00:04:50.809 EAL: No shared files mode enabled, IPC is disabled 00:04:50.809 EAL: Heap on socket 0 was expanded by 258MB 00:04:51.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.378 EAL: request: mp_malloc_sync 00:04:51.378 EAL: No shared files mode enabled, IPC is disabled 00:04:51.378 EAL: Heap on socket 0 was shrunk by 258MB 00:04:51.945 EAL: Trying to obtain current memory policy. 00:04:51.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.945 EAL: Restoring previous memory policy: 4 00:04:51.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.945 EAL: request: mp_malloc_sync 00:04:51.945 EAL: No shared files mode enabled, IPC is disabled 00:04:51.945 EAL: Heap on socket 0 was expanded by 514MB 00:04:53.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.324 EAL: request: mp_malloc_sync 00:04:53.324 EAL: No shared files mode enabled, IPC is disabled 00:04:53.324 EAL: Heap on socket 0 was shrunk by 514MB 00:04:53.893 EAL: Trying to obtain current memory policy. 00:04:53.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.182 EAL: Restoring previous memory policy: 4 00:04:54.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.182 EAL: request: mp_malloc_sync 00:04:54.182 EAL: No shared files mode enabled, IPC is disabled 00:04:54.182 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.717 EAL: request: mp_malloc_sync 00:04:56.717 EAL: No shared files mode enabled, IPC is disabled 00:04:56.717 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:58.091 passed 00:04:58.091 00:04:58.091 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.091 suites 1 1 n/a 0 0 00:04:58.091 tests 2 2 2 0 0 00:04:58.091 asserts 497 497 497 0 n/a 00:04:58.091 00:04:58.091 Elapsed time = 8.524 seconds 00:04:58.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.091 EAL: request: mp_malloc_sync 00:04:58.091 EAL: No shared files mode enabled, IPC is disabled 00:04:58.091 EAL: Heap on socket 0 was shrunk by 2MB 00:04:58.091 EAL: No shared files mode enabled, IPC is disabled 00:04:58.091 EAL: No shared files mode enabled, IPC is disabled 00:04:58.091 EAL: No shared files mode enabled, IPC is disabled 00:04:58.091 00:04:58.091 real 0m8.757s 00:04:58.091 user 0m7.972s 00:04:58.091 sys 0m0.730s 00:04:58.091 23:09:07 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.091 23:09:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:58.091 ************************************ 00:04:58.091 END TEST env_vtophys 00:04:58.091 ************************************ 00:04:58.349 23:09:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.349 23:09:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.349 23:09:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.349 23:09:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.349 23:09:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.349 ************************************ 00:04:58.349 START TEST env_pci 00:04:58.349 ************************************ 00:04:58.349 23:09:07 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.349 00:04:58.349 00:04:58.349 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.349 http://cunit.sourceforge.net/ 00:04:58.349 00:04:58.349 00:04:58.349 Suite: pci 00:04:58.349 Test: pci_hook ...[2024-07-10 23:09:07.227540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2216815 has claimed it 00:04:58.349 EAL: Cannot find device (10000:00:01.0) 00:04:58.349 EAL: Failed to attach device on primary process 00:04:58.349 passed 00:04:58.349 00:04:58.349 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.349 suites 1 1 n/a 0 0 00:04:58.349 tests 1 1 1 0 0 00:04:58.349 asserts 25 25 25 0 n/a 00:04:58.349 00:04:58.349 Elapsed time = 0.045 seconds 00:04:58.349 00:04:58.349 real 0m0.121s 00:04:58.349 user 0m0.049s 00:04:58.349 sys 0m0.071s 00:04:58.349 23:09:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.349 23:09:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:58.349 ************************************ 00:04:58.349 END TEST env_pci 00:04:58.349 ************************************ 00:04:58.349 23:09:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.349 23:09:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:58.349 23:09:07 env -- env/env.sh@15 -- # uname 00:04:58.349 23:09:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:58.349 23:09:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:58.349 23:09:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.349 23:09:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:58.349 23:09:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.349 23:09:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.349 ************************************ 00:04:58.349 START TEST env_dpdk_post_init 00:04:58.349 ************************************ 00:04:58.349 23:09:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.608 EAL: Detected CPU lcores: 96 00:04:58.608 EAL: Detected NUMA nodes: 2 00:04:58.608 EAL: Detected shared linkage of DPDK 00:04:58.608 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.608 EAL: Selected IOVA mode 'VA' 00:04:58.608 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.608 EAL: VFIO support initialized 00:04:58.608 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.608 EAL: Using IOMMU type 1 (Type 1) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:58.608 EAL: Ignore mapping IO port bar(1) 00:04:58.608 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:58.867 EAL: Ignore mapping IO port bar(1) 00:04:58.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:59.434 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:59.434 EAL: Ignore mapping IO port bar(1) 00:04:59.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:59.693 EAL: Ignore mapping IO port bar(1) 00:04:59.693 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:02.983 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:02.983 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:02.983 Starting DPDK initialization... 00:05:02.983 Starting SPDK post initialization... 00:05:02.983 SPDK NVMe probe 00:05:02.983 Attaching to 0000:5e:00.0 00:05:02.983 Attached to 0000:5e:00.0 00:05:02.983 Cleaning up... 
00:05:02.983 00:05:02.983 real 0m4.473s 00:05:02.983 user 0m3.359s 00:05:02.983 sys 0m0.188s 00:05:02.983 23:09:11 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.983 23:09:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.983 ************************************ 00:05:02.983 END TEST env_dpdk_post_init 00:05:02.983 ************************************ 00:05:02.983 23:09:11 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.983 23:09:11 env -- env/env.sh@26 -- # uname 00:05:02.983 23:09:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:02.983 23:09:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.983 23:09:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.983 23:09:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.983 23:09:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.983 ************************************ 00:05:02.983 START TEST env_mem_callbacks 00:05:02.983 ************************************ 00:05:02.983 23:09:11 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.983 EAL: Detected CPU lcores: 96 00:05:02.983 EAL: Detected NUMA nodes: 2 00:05:02.983 EAL: Detected shared linkage of DPDK 00:05:02.983 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:02.983 EAL: Selected IOVA mode 'VA' 00:05:02.983 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.983 EAL: VFIO support initialized 00:05:02.983 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:02.983 00:05:02.983 00:05:02.983 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.983 http://cunit.sourceforge.net/ 00:05:02.983 00:05:02.983 00:05:02.983 Suite: memory 00:05:02.983 Test: test ... 
00:05:02.983 register 0x200000200000 2097152 00:05:02.983 malloc 3145728 00:05:02.983 register 0x200000400000 4194304 00:05:02.983 buf 0x2000004fffc0 len 3145728 PASSED 00:05:02.983 malloc 64 00:05:02.983 buf 0x2000004ffec0 len 64 PASSED 00:05:02.983 malloc 4194304 00:05:02.983 register 0x200000800000 6291456 00:05:02.983 buf 0x2000009fffc0 len 4194304 PASSED 00:05:02.983 free 0x2000004fffc0 3145728 00:05:02.983 free 0x2000004ffec0 64 00:05:02.983 unregister 0x200000400000 4194304 PASSED 00:05:02.983 free 0x2000009fffc0 4194304 00:05:03.243 unregister 0x200000800000 6291456 PASSED 00:05:03.243 malloc 8388608 00:05:03.243 register 0x200000400000 10485760 00:05:03.243 buf 0x2000005fffc0 len 8388608 PASSED 00:05:03.243 free 0x2000005fffc0 8388608 00:05:03.243 unregister 0x200000400000 10485760 PASSED 00:05:03.243 passed 00:05:03.243 00:05:03.243 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.243 suites 1 1 n/a 0 0 00:05:03.243 tests 1 1 1 0 0 00:05:03.243 asserts 15 15 15 0 n/a 00:05:03.243 00:05:03.243 Elapsed time = 0.073 seconds 00:05:03.243 00:05:03.243 real 0m0.183s 00:05:03.243 user 0m0.112s 00:05:03.243 sys 0m0.070s 00:05:03.243 23:09:12 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.243 23:09:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:03.243 ************************************ 00:05:03.243 END TEST env_mem_callbacks 00:05:03.243 ************************************ 00:05:03.243 23:09:12 env -- common/autotest_common.sh@1142 -- # return 0 00:05:03.243 00:05:03.243 real 0m14.211s 00:05:03.243 user 0m11.893s 00:05:03.243 sys 0m1.367s 00:05:03.243 23:09:12 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.243 23:09:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.243 ************************************ 00:05:03.243 END TEST env 00:05:03.243 ************************************ 00:05:03.243 23:09:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.243 23:09:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:03.243 23:09:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.243 23:09:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.243 23:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.243 ************************************ 00:05:03.243 START TEST rpc 00:05:03.243 ************************************ 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:03.243 * Looking for test storage... 00:05:03.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.243 23:09:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2218252 00:05:03.243 23:09:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.243 23:09:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:03.243 23:09:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2218252 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@829 -- # '[' -z 2218252 ']' 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:03.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.243 23:09:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.503 [2024-07-10 23:09:12.366452] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:03.503 [2024-07-10 23:09:12.366560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218252 ] 00:05:03.503 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.503 [2024-07-10 23:09:12.469229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.762 [2024-07-10 23:09:12.671948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:03.762 [2024-07-10 23:09:12.671998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2218252' to capture a snapshot of events at runtime. 00:05:03.762 [2024-07-10 23:09:12.672009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:03.762 [2024-07-10 23:09:12.672022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:03.762 [2024-07-10 23:09:12.672030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2218252 for offline analysis/debug. 00:05:03.762 [2024-07-10 23:09:12.672062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.697 23:09:13 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.697 23:09:13 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:04.697 23:09:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.697 23:09:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.697 23:09:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:04.697 23:09:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:04.697 23:09:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.697 23:09:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.697 23:09:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.697 ************************************ 00:05:04.697 START TEST rpc_integrity 00:05:04.697 ************************************ 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.697 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:04.697 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.698 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.698 { 00:05:04.698 "name": "Malloc0", 00:05:04.698 "aliases": [ 00:05:04.698 "2e08ba6d-d222-4828-ae11-11b2645b260f" 00:05:04.698 ], 00:05:04.698 "product_name": "Malloc disk", 00:05:04.698 "block_size": 512, 00:05:04.698 "num_blocks": 16384, 00:05:04.698 "uuid": "2e08ba6d-d222-4828-ae11-11b2645b260f", 00:05:04.698 "assigned_rate_limits": { 00:05:04.698 "rw_ios_per_sec": 0, 00:05:04.698 "rw_mbytes_per_sec": 0, 00:05:04.698 "r_mbytes_per_sec": 0, 00:05:04.698 "w_mbytes_per_sec": 0 00:05:04.698 }, 00:05:04.698 "claimed": false, 00:05:04.698 "zoned": false, 00:05:04.698 "supported_io_types": { 00:05:04.698 "read": true, 00:05:04.698 "write": true, 00:05:04.698 "unmap": true, 00:05:04.698 "flush": true, 00:05:04.698 "reset": true, 00:05:04.698 "nvme_admin": false, 00:05:04.698 "nvme_io": false, 00:05:04.698 "nvme_io_md": false, 00:05:04.698 "write_zeroes": true, 00:05:04.698 "zcopy": true, 00:05:04.698 "get_zone_info": false, 00:05:04.698 "zone_management": false, 00:05:04.698 "zone_append": false, 00:05:04.698 "compare": false, 00:05:04.698 "compare_and_write": false, 00:05:04.698 "abort": true, 00:05:04.698 "seek_hole": false, 00:05:04.698 "seek_data": false, 00:05:04.698 "copy": true, 00:05:04.698 "nvme_iov_md": false 00:05:04.698 }, 00:05:04.698 "memory_domains": [ 00:05:04.698 { 00:05:04.698 "dma_device_id": "system", 00:05:04.698 "dma_device_type": 1 00:05:04.698 }, 00:05:04.698 { 00:05:04.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.698 "dma_device_type": 2 00:05:04.698 } 00:05:04.698 ], 00:05:04.698 "driver_specific": {} 00:05:04.698 } 00:05:04.698 ]' 00:05:04.698 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.698 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.698 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.698 [2024-07-10 23:09:13.750359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:04.698 [2024-07-10 23:09:13.750418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.698 [2024-07-10 23:09:13.750442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:05:04.698 [2024-07-10 23:09:13.750455] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:05:04.698 [2024-07-10 23:09:13.752410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.698 [2024-07-10 23:09:13.752451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.698 Passthru0 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.698 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.698 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.957 { 00:05:04.957 "name": "Malloc0", 00:05:04.957 "aliases": [ 00:05:04.957 "2e08ba6d-d222-4828-ae11-11b2645b260f" 00:05:04.957 ], 00:05:04.957 "product_name": "Malloc disk", 00:05:04.957 "block_size": 512, 00:05:04.957 "num_blocks": 16384, 00:05:04.957 "uuid": "2e08ba6d-d222-4828-ae11-11b2645b260f", 00:05:04.957 "assigned_rate_limits": { 00:05:04.957 "rw_ios_per_sec": 0, 00:05:04.957 "rw_mbytes_per_sec": 0, 00:05:04.957 "r_mbytes_per_sec": 0, 00:05:04.957 "w_mbytes_per_sec": 0 00:05:04.957 }, 00:05:04.957 "claimed": true, 00:05:04.957 "claim_type": "exclusive_write", 00:05:04.957 "zoned": false, 00:05:04.957 "supported_io_types": { 00:05:04.957 "read": true, 00:05:04.957 "write": true, 00:05:04.957 "unmap": true, 00:05:04.957 "flush": true, 00:05:04.957 "reset": true, 00:05:04.957 "nvme_admin": false, 00:05:04.957 "nvme_io": false, 00:05:04.957 "nvme_io_md": false, 00:05:04.957 "write_zeroes": true, 00:05:04.957 "zcopy": true, 00:05:04.957 "get_zone_info": false, 00:05:04.957 "zone_management": false, 00:05:04.957 "zone_append": false, 00:05:04.957 "compare": false, 00:05:04.957 "compare_and_write": false, 00:05:04.957 "abort": true, 00:05:04.957 "seek_hole": false, 00:05:04.957 "seek_data": false, 00:05:04.957 "copy": true, 00:05:04.957 "nvme_iov_md": false 00:05:04.957 }, 00:05:04.957 "memory_domains": [ 00:05:04.957 { 00:05:04.957 "dma_device_id": "system", 00:05:04.957 "dma_device_type": 1 00:05:04.957 }, 00:05:04.957 { 00:05:04.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.957 "dma_device_type": 2 00:05:04.957 } 00:05:04.957 ], 00:05:04.957 "driver_specific": {} 00:05:04.957 }, 00:05:04.957 { 00:05:04.957 "name": "Passthru0", 00:05:04.957 "aliases": [ 00:05:04.957 "625ac5d2-3fc9-5249-bdfb-22a4e84c1c77" 00:05:04.957 ], 00:05:04.957 "product_name": "passthru", 00:05:04.957 "block_size": 512, 00:05:04.957 "num_blocks": 16384, 00:05:04.957 "uuid": "625ac5d2-3fc9-5249-bdfb-22a4e84c1c77", 00:05:04.957 "assigned_rate_limits": { 00:05:04.957 "rw_ios_per_sec": 0, 00:05:04.957 "rw_mbytes_per_sec": 0, 00:05:04.957 "r_mbytes_per_sec": 0, 00:05:04.957 "w_mbytes_per_sec": 0 00:05:04.957 }, 00:05:04.957 "claimed": false, 00:05:04.957 "zoned": false, 00:05:04.957 "supported_io_types": { 00:05:04.957 "read": true, 00:05:04.957 "write": true, 00:05:04.957 "unmap": true, 00:05:04.957 "flush": true, 00:05:04.957 "reset": true, 00:05:04.957 "nvme_admin": false, 00:05:04.957 "nvme_io": false, 00:05:04.957 "nvme_io_md": false, 00:05:04.957 "write_zeroes": true, 00:05:04.957 "zcopy": true, 00:05:04.957 "get_zone_info": false, 00:05:04.957 "zone_management": false, 00:05:04.957 "zone_append": false, 00:05:04.957 "compare": false, 00:05:04.957 "compare_and_write": false, 00:05:04.957 "abort": true, 00:05:04.957 
"seek_hole": false, 00:05:04.957 "seek_data": false, 00:05:04.957 "copy": true, 00:05:04.957 "nvme_iov_md": false 00:05:04.957 }, 00:05:04.957 "memory_domains": [ 00:05:04.957 { 00:05:04.957 "dma_device_id": "system", 00:05:04.957 "dma_device_type": 1 00:05:04.957 }, 00:05:04.957 { 00:05:04.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.957 "dma_device_type": 2 00:05:04.957 } 00:05:04.957 ], 00:05:04.957 "driver_specific": { 00:05:04.957 "passthru": { 00:05:04.957 "name": "Passthru0", 00:05:04.957 "base_bdev_name": "Malloc0" 00:05:04.957 } 00:05:04.957 } 00:05:04.957 } 00:05:04.957 ]' 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.957 23:09:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.957 00:05:04.957 real 0m0.302s 00:05:04.957 user 0m0.168s 00:05:04.957 sys 0m0.033s 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.957 23:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 ************************************ 00:05:04.957 END TEST rpc_integrity 00:05:04.957 ************************************ 00:05:04.957 23:09:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.957 23:09:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:04.957 23:09:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.957 23:09:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.957 23:09:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 ************************************ 00:05:04.957 START TEST rpc_plugins 00:05:04.957 ************************************ 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:04.957 23:09:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.957 23:09:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:04.957 23:09:13 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 23:09:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.957 23:09:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:04.957 { 00:05:04.957 "name": "Malloc1", 00:05:04.957 "aliases": [ 00:05:04.957 "197ce161-0eb3-4a12-b1f6-abad34b7759c" 00:05:04.957 ], 00:05:04.957 "product_name": "Malloc disk", 00:05:04.957 "block_size": 4096, 00:05:04.957 "num_blocks": 256, 00:05:04.957 "uuid": "197ce161-0eb3-4a12-b1f6-abad34b7759c", 00:05:04.957 "assigned_rate_limits": { 00:05:04.957 "rw_ios_per_sec": 0, 00:05:04.957 "rw_mbytes_per_sec": 0, 00:05:04.957 "r_mbytes_per_sec": 0, 00:05:04.957 "w_mbytes_per_sec": 0 00:05:04.957 }, 00:05:04.957 "claimed": false, 00:05:04.957 "zoned": false, 00:05:04.957 "supported_io_types": { 00:05:04.957 "read": true, 00:05:04.957 "write": true, 00:05:04.957 "unmap": true, 00:05:04.957 "flush": true, 00:05:04.957 "reset": true, 00:05:04.957 "nvme_admin": false, 00:05:04.957 "nvme_io": false, 00:05:04.957 "nvme_io_md": false, 00:05:04.957 "write_zeroes": true, 00:05:04.957 "zcopy": true, 00:05:04.957 "get_zone_info": false, 00:05:04.957 "zone_management": false, 00:05:04.957 "zone_append": false, 00:05:04.957 "compare": false, 00:05:04.957 "compare_and_write": false, 00:05:04.957 "abort": true, 00:05:04.957 "seek_hole": false, 00:05:04.957 "seek_data": false, 00:05:04.957 "copy": true, 00:05:04.957 "nvme_iov_md": false 00:05:04.957 }, 00:05:04.957 "memory_domains": [ 00:05:04.957 { 00:05:04.957 "dma_device_id": "system", 00:05:04.957 "dma_device_type": 1 00:05:04.957 }, 00:05:04.957 { 00:05:04.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.957 "dma_device_type": 2 00:05:04.957 } 00:05:04.957 ], 00:05:04.957 "driver_specific": {} 00:05:04.957 } 00:05:04.957 ]' 00:05:04.957 23:09:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:05.216 23:09:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:05.216 23:09:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.216 23:09:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.216 23:09:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:05.216 23:09:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:05.216 23:09:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:05.216 00:05:05.216 real 0m0.138s 00:05:05.216 user 0m0.083s 00:05:05.216 sys 0m0.014s 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.216 23:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.216 ************************************ 00:05:05.216 END TEST rpc_plugins 00:05:05.216 ************************************ 00:05:05.216 23:09:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:05.216 23:09:14 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:05.216 23:09:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.216 23:09:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.216 23:09:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.216 ************************************ 00:05:05.216 START TEST rpc_trace_cmd_test 00:05:05.216 ************************************ 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:05.216 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2218252", 00:05:05.216 "tpoint_group_mask": "0x8", 00:05:05.216 "iscsi_conn": { 00:05:05.216 "mask": "0x2", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "scsi": { 00:05:05.216 "mask": "0x4", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "bdev": { 00:05:05.216 "mask": "0x8", 00:05:05.216 "tpoint_mask": "0xffffffffffffffff" 00:05:05.216 }, 00:05:05.216 "nvmf_rdma": { 00:05:05.216 "mask": "0x10", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "nvmf_tcp": { 00:05:05.216 "mask": "0x20", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "ftl": { 00:05:05.216 "mask": "0x40", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "blobfs": { 00:05:05.216 "mask": "0x80", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "dsa": { 00:05:05.216 "mask": "0x200", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "thread": { 00:05:05.216 "mask": "0x400", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "nvme_pcie": { 00:05:05.216 "mask": "0x800", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "iaa": { 00:05:05.216 "mask": "0x1000", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "nvme_tcp": { 00:05:05.216 "mask": "0x2000", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "bdev_nvme": { 00:05:05.216 "mask": "0x4000", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 }, 00:05:05.216 "sock": { 00:05:05.216 "mask": "0x8000", 00:05:05.216 "tpoint_mask": "0x0" 00:05:05.216 } 00:05:05.216 }' 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:05.216 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:05.475 00:05:05.475 real 0m0.191s 00:05:05.475 user 0m0.161s 00:05:05.475 sys 0m0.024s 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.475 23:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 ************************************ 00:05:05.475 END TEST rpc_trace_cmd_test 00:05:05.475 ************************************ 00:05:05.475 23:09:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:05.475 23:09:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:05.475 23:09:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:05.475 23:09:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:05.475 23:09:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.475 23:09:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.475 23:09:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 ************************************ 00:05:05.475 START TEST rpc_daemon_integrity 00:05:05.475 ************************************ 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.475 { 00:05:05.475 "name": "Malloc2", 00:05:05.475 "aliases": [ 00:05:05.475 "440436d6-80bc-4dba-945b-2095c26848c5" 00:05:05.475 ], 00:05:05.475 "product_name": "Malloc disk", 00:05:05.475 "block_size": 512, 00:05:05.475 "num_blocks": 16384, 00:05:05.475 "uuid": "440436d6-80bc-4dba-945b-2095c26848c5", 00:05:05.475 "assigned_rate_limits": { 00:05:05.475 "rw_ios_per_sec": 0, 00:05:05.475 "rw_mbytes_per_sec": 0, 00:05:05.475 "r_mbytes_per_sec": 0, 00:05:05.475 "w_mbytes_per_sec": 0 00:05:05.475 }, 00:05:05.475 "claimed": false, 00:05:05.475 "zoned": false, 00:05:05.475 "supported_io_types": { 00:05:05.475 "read": true, 00:05:05.475 "write": true, 00:05:05.475 "unmap": true, 00:05:05.475 "flush": true, 00:05:05.475 "reset": true, 00:05:05.475 "nvme_admin": false, 
00:05:05.475 "nvme_io": false, 00:05:05.475 "nvme_io_md": false, 00:05:05.475 "write_zeroes": true, 00:05:05.475 "zcopy": true, 00:05:05.475 "get_zone_info": false, 00:05:05.475 "zone_management": false, 00:05:05.475 "zone_append": false, 00:05:05.475 "compare": false, 00:05:05.475 "compare_and_write": false, 00:05:05.475 "abort": true, 00:05:05.475 "seek_hole": false, 00:05:05.475 "seek_data": false, 00:05:05.475 "copy": true, 00:05:05.475 "nvme_iov_md": false 00:05:05.475 }, 00:05:05.475 "memory_domains": [ 00:05:05.475 { 00:05:05.475 "dma_device_id": "system", 00:05:05.475 "dma_device_type": 1 00:05:05.475 }, 00:05:05.475 { 00:05:05.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.475 "dma_device_type": 2 00:05:05.475 } 00:05:05.475 ], 00:05:05.475 "driver_specific": {} 00:05:05.475 } 00:05:05.475 ]' 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 [2024-07-10 23:09:14.537804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:05.475 [2024-07-10 23:09:14.537855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.475 [2024-07-10 23:09:14.537876] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000024080 00:05:05.475 [2024-07-10 23:09:14.537891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.475 [2024-07-10 23:09:14.539786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.475 [2024-07-10 23:09:14.539816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.475 Passthru0 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:05.475 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.734 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.734 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.734 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.734 { 00:05:05.734 "name": "Malloc2", 00:05:05.734 "aliases": [ 00:05:05.734 "440436d6-80bc-4dba-945b-2095c26848c5" 00:05:05.734 ], 00:05:05.734 "product_name": "Malloc disk", 00:05:05.734 "block_size": 512, 00:05:05.734 "num_blocks": 16384, 00:05:05.735 "uuid": "440436d6-80bc-4dba-945b-2095c26848c5", 00:05:05.735 "assigned_rate_limits": { 00:05:05.735 "rw_ios_per_sec": 0, 00:05:05.735 "rw_mbytes_per_sec": 0, 00:05:05.735 "r_mbytes_per_sec": 0, 00:05:05.735 "w_mbytes_per_sec": 0 00:05:05.735 }, 00:05:05.735 "claimed": true, 00:05:05.735 "claim_type": "exclusive_write", 00:05:05.735 "zoned": false, 00:05:05.735 "supported_io_types": { 00:05:05.735 "read": true, 00:05:05.735 "write": true, 00:05:05.735 "unmap": true, 00:05:05.735 "flush": true, 00:05:05.735 "reset": true, 00:05:05.735 "nvme_admin": false, 00:05:05.735 "nvme_io": false, 00:05:05.735 "nvme_io_md": false, 00:05:05.735 "write_zeroes": true, 00:05:05.735 "zcopy": 
true, 00:05:05.735 "get_zone_info": false, 00:05:05.735 "zone_management": false, 00:05:05.735 "zone_append": false, 00:05:05.735 "compare": false, 00:05:05.735 "compare_and_write": false, 00:05:05.735 "abort": true, 00:05:05.735 "seek_hole": false, 00:05:05.735 "seek_data": false, 00:05:05.735 "copy": true, 00:05:05.735 "nvme_iov_md": false 00:05:05.735 }, 00:05:05.735 "memory_domains": [ 00:05:05.735 { 00:05:05.735 "dma_device_id": "system", 00:05:05.735 "dma_device_type": 1 00:05:05.735 }, 00:05:05.735 { 00:05:05.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.735 "dma_device_type": 2 00:05:05.735 } 00:05:05.735 ], 00:05:05.735 "driver_specific": {} 00:05:05.735 }, 00:05:05.735 { 00:05:05.735 "name": "Passthru0", 00:05:05.735 "aliases": [ 00:05:05.735 "092d1742-46d3-5f51-aeaf-8b96347856d9" 00:05:05.735 ], 00:05:05.735 "product_name": "passthru", 00:05:05.735 "block_size": 512, 00:05:05.735 "num_blocks": 16384, 00:05:05.735 "uuid": "092d1742-46d3-5f51-aeaf-8b96347856d9", 00:05:05.735 "assigned_rate_limits": { 00:05:05.735 "rw_ios_per_sec": 0, 00:05:05.735 "rw_mbytes_per_sec": 0, 00:05:05.735 "r_mbytes_per_sec": 0, 00:05:05.735 "w_mbytes_per_sec": 0 00:05:05.735 }, 00:05:05.735 "claimed": false, 00:05:05.735 "zoned": false, 00:05:05.735 "supported_io_types": { 00:05:05.735 "read": true, 00:05:05.735 "write": true, 00:05:05.735 "unmap": true, 00:05:05.735 "flush": true, 00:05:05.735 "reset": true, 00:05:05.735 "nvme_admin": false, 00:05:05.735 "nvme_io": false, 00:05:05.735 "nvme_io_md": false, 00:05:05.735 "write_zeroes": true, 00:05:05.735 "zcopy": true, 00:05:05.735 "get_zone_info": false, 00:05:05.735 "zone_management": false, 00:05:05.735 "zone_append": false, 00:05:05.735 "compare": false, 00:05:05.735 "compare_and_write": false, 00:05:05.735 "abort": true, 00:05:05.735 "seek_hole": false, 00:05:05.735 "seek_data": false, 00:05:05.735 "copy": true, 00:05:05.735 "nvme_iov_md": false 00:05:05.735 }, 00:05:05.735 "memory_domains": [ 00:05:05.735 { 00:05:05.735 "dma_device_id": "system", 00:05:05.735 "dma_device_type": 1 00:05:05.735 }, 00:05:05.735 { 00:05:05.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.735 "dma_device_type": 2 00:05:05.735 } 00:05:05.735 ], 00:05:05.735 "driver_specific": { 00:05:05.735 "passthru": { 00:05:05.735 "name": "Passthru0", 00:05:05.735 "base_bdev_name": "Malloc2" 00:05:05.735 } 00:05:05.735 } 00:05:05.735 } 00:05:05.735 ]' 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.735 00:05:05.735 real 0m0.266s 00:05:05.735 user 0m0.142s 00:05:05.735 sys 0m0.029s 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.735 23:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.735 ************************************ 00:05:05.735 END TEST rpc_daemon_integrity 00:05:05.735 ************************************ 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:05.735 23:09:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:05.735 23:09:14 rpc -- rpc/rpc.sh@84 -- # killprocess 2218252 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@948 -- # '[' -z 2218252 ']' 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@952 -- # kill -0 2218252 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@953 -- # uname 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218252 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218252' 00:05:05.735 killing process with pid 2218252 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@967 -- # kill 2218252 00:05:05.735 23:09:14 rpc -- common/autotest_common.sh@972 -- # wait 2218252 00:05:08.272 00:05:08.272 real 0m4.928s 00:05:08.272 user 0m5.456s 00:05:08.272 sys 0m0.748s 00:05:08.272 23:09:17 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.272 23:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.272 ************************************ 00:05:08.272 END TEST rpc 00:05:08.272 ************************************ 00:05:08.272 23:09:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.272 23:09:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:08.272 23:09:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.272 23:09:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.272 23:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.272 ************************************ 00:05:08.272 START TEST skip_rpc 00:05:08.272 ************************************ 00:05:08.272 23:09:17 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:08.272 * Looking for test storage... 
00:05:08.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:08.273 23:09:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.273 23:09:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.273 23:09:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:08.273 23:09:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.273 23:09:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.273 23:09:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.273 ************************************ 00:05:08.273 START TEST skip_rpc 00:05:08.273 ************************************ 00:05:08.273 23:09:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:08.273 23:09:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2219338 00:05:08.273 23:09:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.273 23:09:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:08.273 23:09:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:08.531 [2024-07-10 23:09:17.376622] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:08.531 [2024-07-10 23:09:17.376713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219338 ] 00:05:08.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.531 [2024-07-10 23:09:17.476807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.790 [2024-07-10 23:09:17.681704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2219338 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2219338 ']' 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2219338 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2219338 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2219338' 00:05:14.063 killing process with pid 2219338 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2219338 00:05:14.063 23:09:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2219338 00:05:15.993 00:05:15.993 real 0m7.498s 00:05:15.993 user 0m7.143s 00:05:15.993 sys 0m0.361s 00:05:15.993 23:09:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.993 23:09:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.993 ************************************ 00:05:15.993 END TEST skip_rpc 00:05:15.993 ************************************ 00:05:15.993 23:09:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:15.993 23:09:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:15.993 23:09:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.993 23:09:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.993 23:09:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.993 ************************************ 00:05:15.993 START TEST skip_rpc_with_json 00:05:15.993 ************************************ 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2220523 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2220523 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2220523 ']' 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
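Recapping the skip_rpc case that just finished: the target was launched with --no-rpc-server, so no listener exists on /var/tmp/spdk.sock and any RPC has to fail; the NOT wrapper in the trace asserts exactly that. A sketch of the check (paths shortened; rpc_cmd is the suite's wrapper around scripts/rpc.py):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but no RPC socket is created
    scripts/rpc.py spdk_get_version               # fails: nothing is listening
    echo $?                                       # nonzero, which is what NOT ... expects (es=1 above)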
00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.993 23:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.993 [2024-07-10 23:09:24.935841] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:15.993 [2024-07-10 23:09:24.935935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220523 ] 00:05:15.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.993 [2024-07-10 23:09:25.039319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.250 [2024-07-10 23:09:25.246395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.214 [2024-07-10 23:09:26.135935] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:17.214 request: 00:05:17.214 { 00:05:17.214 "trtype": "tcp", 00:05:17.214 "method": "nvmf_get_transports", 00:05:17.214 "req_id": 1 00:05:17.214 } 00:05:17.214 Got JSON-RPC error response 00:05:17.214 response: 00:05:17.214 { 00:05:17.214 "code": -19, 00:05:17.214 "message": "No such device" 00:05:17.214 } 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.214 [2024-07-10 23:09:26.144035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.214 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.473 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.473 23:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.473 { 00:05:17.473 "subsystems": [ 00:05:17.473 { 00:05:17.473 "subsystem": "keyring", 00:05:17.473 "config": [] 00:05:17.473 }, 00:05:17.473 { 00:05:17.473 "subsystem": "iobuf", 00:05:17.473 "config": [ 00:05:17.473 { 00:05:17.473 "method": "iobuf_set_options", 00:05:17.473 "params": { 00:05:17.473 "small_pool_count": 8192, 00:05:17.473 "large_pool_count": 1024, 00:05:17.473 "small_bufsize": 8192, 00:05:17.473 "large_bufsize": 135168 00:05:17.473 } 00:05:17.473 } 00:05:17.473 ] 00:05:17.473 }, 00:05:17.473 { 00:05:17.473 "subsystem": 
"sock", 00:05:17.473 "config": [ 00:05:17.473 { 00:05:17.473 "method": "sock_set_default_impl", 00:05:17.473 "params": { 00:05:17.473 "impl_name": "posix" 00:05:17.473 } 00:05:17.473 }, 00:05:17.473 { 00:05:17.473 "method": "sock_impl_set_options", 00:05:17.473 "params": { 00:05:17.473 "impl_name": "ssl", 00:05:17.473 "recv_buf_size": 4096, 00:05:17.473 "send_buf_size": 4096, 00:05:17.473 "enable_recv_pipe": true, 00:05:17.473 "enable_quickack": false, 00:05:17.473 "enable_placement_id": 0, 00:05:17.473 "enable_zerocopy_send_server": true, 00:05:17.473 "enable_zerocopy_send_client": false, 00:05:17.473 "zerocopy_threshold": 0, 00:05:17.473 "tls_version": 0, 00:05:17.473 "enable_ktls": false 00:05:17.473 } 00:05:17.473 }, 00:05:17.473 { 00:05:17.473 "method": "sock_impl_set_options", 00:05:17.473 "params": { 00:05:17.473 "impl_name": "posix", 00:05:17.473 "recv_buf_size": 2097152, 00:05:17.473 "send_buf_size": 2097152, 00:05:17.473 "enable_recv_pipe": true, 00:05:17.473 "enable_quickack": false, 00:05:17.473 "enable_placement_id": 0, 00:05:17.473 "enable_zerocopy_send_server": true, 00:05:17.473 "enable_zerocopy_send_client": false, 00:05:17.473 "zerocopy_threshold": 0, 00:05:17.473 "tls_version": 0, 00:05:17.473 "enable_ktls": false 00:05:17.473 } 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "vmd", 00:05:17.474 "config": [] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "accel", 00:05:17.474 "config": [ 00:05:17.474 { 00:05:17.474 "method": "accel_set_options", 00:05:17.474 "params": { 00:05:17.474 "small_cache_size": 128, 00:05:17.474 "large_cache_size": 16, 00:05:17.474 "task_count": 2048, 00:05:17.474 "sequence_count": 2048, 00:05:17.474 "buf_count": 2048 00:05:17.474 } 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "bdev", 00:05:17.474 "config": [ 00:05:17.474 { 00:05:17.474 "method": "bdev_set_options", 00:05:17.474 "params": { 00:05:17.474 "bdev_io_pool_size": 65535, 00:05:17.474 "bdev_io_cache_size": 256, 00:05:17.474 "bdev_auto_examine": true, 00:05:17.474 "iobuf_small_cache_size": 128, 00:05:17.474 "iobuf_large_cache_size": 16 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "bdev_raid_set_options", 00:05:17.474 "params": { 00:05:17.474 "process_window_size_kb": 1024 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "bdev_iscsi_set_options", 00:05:17.474 "params": { 00:05:17.474 "timeout_sec": 30 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "bdev_nvme_set_options", 00:05:17.474 "params": { 00:05:17.474 "action_on_timeout": "none", 00:05:17.474 "timeout_us": 0, 00:05:17.474 "timeout_admin_us": 0, 00:05:17.474 "keep_alive_timeout_ms": 10000, 00:05:17.474 "arbitration_burst": 0, 00:05:17.474 "low_priority_weight": 0, 00:05:17.474 "medium_priority_weight": 0, 00:05:17.474 "high_priority_weight": 0, 00:05:17.474 "nvme_adminq_poll_period_us": 10000, 00:05:17.474 "nvme_ioq_poll_period_us": 0, 00:05:17.474 "io_queue_requests": 0, 00:05:17.474 "delay_cmd_submit": true, 00:05:17.474 "transport_retry_count": 4, 00:05:17.474 "bdev_retry_count": 3, 00:05:17.474 "transport_ack_timeout": 0, 00:05:17.474 "ctrlr_loss_timeout_sec": 0, 00:05:17.474 "reconnect_delay_sec": 0, 00:05:17.474 "fast_io_fail_timeout_sec": 0, 00:05:17.474 "disable_auto_failback": false, 00:05:17.474 "generate_uuids": false, 00:05:17.474 "transport_tos": 0, 00:05:17.474 "nvme_error_stat": false, 00:05:17.474 "rdma_srq_size": 0, 00:05:17.474 "io_path_stat": false, 
00:05:17.474 "allow_accel_sequence": false, 00:05:17.474 "rdma_max_cq_size": 0, 00:05:17.474 "rdma_cm_event_timeout_ms": 0, 00:05:17.474 "dhchap_digests": [ 00:05:17.474 "sha256", 00:05:17.474 "sha384", 00:05:17.474 "sha512" 00:05:17.474 ], 00:05:17.474 "dhchap_dhgroups": [ 00:05:17.474 "null", 00:05:17.474 "ffdhe2048", 00:05:17.474 "ffdhe3072", 00:05:17.474 "ffdhe4096", 00:05:17.474 "ffdhe6144", 00:05:17.474 "ffdhe8192" 00:05:17.474 ] 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "bdev_nvme_set_hotplug", 00:05:17.474 "params": { 00:05:17.474 "period_us": 100000, 00:05:17.474 "enable": false 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "bdev_wait_for_examine" 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "scsi", 00:05:17.474 "config": null 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "scheduler", 00:05:17.474 "config": [ 00:05:17.474 { 00:05:17.474 "method": "framework_set_scheduler", 00:05:17.474 "params": { 00:05:17.474 "name": "static" 00:05:17.474 } 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "vhost_scsi", 00:05:17.474 "config": [] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "vhost_blk", 00:05:17.474 "config": [] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "ublk", 00:05:17.474 "config": [] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "nbd", 00:05:17.474 "config": [] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "nvmf", 00:05:17.474 "config": [ 00:05:17.474 { 00:05:17.474 "method": "nvmf_set_config", 00:05:17.474 "params": { 00:05:17.474 "discovery_filter": "match_any", 00:05:17.474 "admin_cmd_passthru": { 00:05:17.474 "identify_ctrlr": false 00:05:17.474 } 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "nvmf_set_max_subsystems", 00:05:17.474 "params": { 00:05:17.474 "max_subsystems": 1024 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "nvmf_set_crdt", 00:05:17.474 "params": { 00:05:17.474 "crdt1": 0, 00:05:17.474 "crdt2": 0, 00:05:17.474 "crdt3": 0 00:05:17.474 } 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "method": "nvmf_create_transport", 00:05:17.474 "params": { 00:05:17.474 "trtype": "TCP", 00:05:17.474 "max_queue_depth": 128, 00:05:17.474 "max_io_qpairs_per_ctrlr": 127, 00:05:17.474 "in_capsule_data_size": 4096, 00:05:17.474 "max_io_size": 131072, 00:05:17.474 "io_unit_size": 131072, 00:05:17.474 "max_aq_depth": 128, 00:05:17.474 "num_shared_buffers": 511, 00:05:17.474 "buf_cache_size": 4294967295, 00:05:17.474 "dif_insert_or_strip": false, 00:05:17.474 "zcopy": false, 00:05:17.474 "c2h_success": true, 00:05:17.474 "sock_priority": 0, 00:05:17.474 "abort_timeout_sec": 1, 00:05:17.474 "ack_timeout": 0, 00:05:17.474 "data_wr_pool_size": 0 00:05:17.474 } 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 }, 00:05:17.474 { 00:05:17.474 "subsystem": "iscsi", 00:05:17.474 "config": [ 00:05:17.474 { 00:05:17.474 "method": "iscsi_set_options", 00:05:17.474 "params": { 00:05:17.474 "node_base": "iqn.2016-06.io.spdk", 00:05:17.474 "max_sessions": 128, 00:05:17.474 "max_connections_per_session": 2, 00:05:17.474 "max_queue_depth": 64, 00:05:17.474 "default_time2wait": 2, 00:05:17.474 "default_time2retain": 20, 00:05:17.474 "first_burst_length": 8192, 00:05:17.474 "immediate_data": true, 00:05:17.474 "allow_duplicated_isid": false, 00:05:17.474 "error_recovery_level": 0, 00:05:17.474 "nop_timeout": 60, 00:05:17.474 "nop_in_interval": 30, 00:05:17.474 "disable_chap": 
false, 00:05:17.474 "require_chap": false, 00:05:17.474 "mutual_chap": false, 00:05:17.474 "chap_group": 0, 00:05:17.474 "max_large_datain_per_connection": 64, 00:05:17.474 "max_r2t_per_connection": 4, 00:05:17.474 "pdu_pool_size": 36864, 00:05:17.474 "immediate_data_pool_size": 16384, 00:05:17.474 "data_out_pool_size": 2048 00:05:17.474 } 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 } 00:05:17.474 ] 00:05:17.474 } 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2220523 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2220523 ']' 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2220523 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220523 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220523' 00:05:17.474 killing process with pid 2220523 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2220523 00:05:17.474 23:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2220523 00:05:20.047 23:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2221228 00:05:20.047 23:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.047 23:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2221228 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2221228 ']' 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2221228 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221228 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221228' 00:05:25.321 killing process with pid 2221228 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2221228 00:05:25.321 23:09:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2221228 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.225 00:05:27.225 real 0m11.370s 00:05:27.225 user 0m10.916s 00:05:27.225 sys 0m0.833s 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.225 ************************************ 00:05:27.225 END TEST skip_rpc_with_json 00:05:27.225 ************************************ 00:05:27.225 23:09:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.225 23:09:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:27.225 23:09:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.225 23:09:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.225 23:09:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.225 ************************************ 00:05:27.225 START TEST skip_rpc_with_delay 00:05:27.225 ************************************ 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:27.225 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.226 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.485 [2024-07-10 23:09:36.371772] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:27.485 [2024-07-10 23:09:36.371867] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:27.485 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:27.485 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.485 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.485 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.485 00:05:27.485 real 0m0.138s 00:05:27.485 user 0m0.080s 00:05:27.485 sys 0m0.057s 00:05:27.485 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.485 23:09:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:27.485 ************************************ 00:05:27.485 END TEST skip_rpc_with_delay 00:05:27.485 ************************************ 00:05:27.485 23:09:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.485 23:09:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:27.485 23:09:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:27.485 23:09:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:27.485 23:09:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.485 23:09:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.485 23:09:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.485 ************************************ 00:05:27.485 START TEST exit_on_failed_rpc_init 00:05:27.485 ************************************ 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2222654 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2222654 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2222654 ']' 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.485 23:09:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.744 [2024-07-10 23:09:36.568192] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
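The skip_rpc_with_delay failure above is the intended outcome: --wait-for-rpc tells the app to pause initialization until an RPC resumes it, which can never happen when --no-rpc-server disables the listener, so spdk_tgt rejects the combination at startup. A sketch of the assertion (same binary and core mask as the trace):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # refused, see the app.c errors above
    echo $?                                                    # nonzero; wrapped in NOT, so the test passes on failure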
00:05:27.744 [2024-07-10 23:09:36.568281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222654 ] 00:05:27.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.744 [2024-07-10 23:09:36.670918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.003 [2024-07-10 23:09:36.881062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.939 23:09:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.939 [2024-07-10 23:09:37.877136] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
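The second target instance being launched here (-m 0x2) is meant to fail: the first instance, pid 2222654, still owns /var/tmp/spdk.sock, so RPC initialization in the second cannot bind the socket and the app stops, as the rpc.c errors just below show. A sketch of the collision (a different socket path via -r would sidestep it, which is precisely what this test avoids):

    build/bin/spdk_tgt -m 0x1 &   # first instance binds /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2     # second instance: "RPC Unix domain socket path ... in use"; exit_on_failed_rpc_init expects this failure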
00:05:28.939 [2024-07-10 23:09:37.877234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222887 ] 00:05:28.939 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.939 [2024-07-10 23:09:37.978578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.197 [2024-07-10 23:09:38.195909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.197 [2024-07-10 23:09:38.196005] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:29.197 [2024-07-10 23:09:38.196021] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:29.197 [2024-07-10 23:09:38.196031] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.764 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:29.764 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.764 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2222654 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2222654 ']' 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2222654 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2222654 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2222654' 00:05:29.765 killing process with pid 2222654 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2222654 00:05:29.765 23:09:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2222654 00:05:32.301 00:05:32.301 real 0m4.605s 00:05:32.301 user 0m5.252s 00:05:32.301 sys 0m0.557s 00:05:32.301 23:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.301 23:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.301 ************************************ 00:05:32.301 END TEST exit_on_failed_rpc_init 00:05:32.301 ************************************ 00:05:32.301 23:09:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:32.301 23:09:41 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.301 00:05:32.301 real 0m23.928s 00:05:32.301 user 0m23.501s 00:05:32.301 sys 0m2.040s 00:05:32.301 23:09:41 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.301 23:09:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.301 ************************************ 00:05:32.301 END TEST skip_rpc 00:05:32.301 ************************************ 00:05:32.301 23:09:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.301 23:09:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.301 23:09:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.301 23:09:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.301 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.301 ************************************ 00:05:32.301 START TEST rpc_client 00:05:32.301 ************************************ 00:05:32.301 23:09:41 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.301 * Looking for test storage... 00:05:32.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:32.301 23:09:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:32.301 OK 00:05:32.301 23:09:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:32.301 00:05:32.301 real 0m0.134s 00:05:32.301 user 0m0.057s 00:05:32.301 sys 0m0.085s 00:05:32.301 23:09:41 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.301 23:09:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:32.301 ************************************ 00:05:32.301 END TEST rpc_client 00:05:32.301 ************************************ 00:05:32.301 23:09:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.301 23:09:41 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.301 23:09:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.301 23:09:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.301 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.561 ************************************ 00:05:32.561 START TEST json_config 00:05:32.561 ************************************ 00:05:32.561 23:09:41 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.561 
23:09:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.561 23:09:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.561 23:09:41 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.561 23:09:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.561 23:09:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.561 23:09:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.561 23:09:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.561 23:09:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:32.561 23:09:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@47 -- # : 0 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.561 23:09:41 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.561 23:09:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:32.561 INFO: JSON configuration test init 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:32.561 23:09:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.561 23:09:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:32.561 23:09:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.561 23:09:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.561 23:09:41 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:32.561 23:09:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.561 23:09:41 json_config -- json_config/common.sh@10 -- # shift 00:05:32.561 23:09:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.561 23:09:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.561 23:09:41 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.561 23:09:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.561 23:09:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.561 23:09:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2223480 00:05:32.561 23:09:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.562 Waiting for target to run... 00:05:32.562 23:09:41 json_config -- json_config/common.sh@25 -- # waitforlisten 2223480 /var/tmp/spdk_tgt.sock 00:05:32.562 23:09:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:32.562 23:09:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 2223480 ']' 00:05:32.562 23:09:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.562 23:09:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.562 23:09:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.562 23:09:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.562 23:09:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.562 [2024-07-10 23:09:41.562331] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:32.562 [2024-07-10 23:09:41.562434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223480 ] 00:05:32.562 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.130 [2024-07-10 23:09:42.042339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.390 [2024-07-10 23:09:42.259260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.390 23:09:42 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.390 23:09:42 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:33.390 23:09:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.390 00:05:33.390 23:09:42 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:33.390 23:09:42 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:33.390 23:09:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.390 23:09:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.390 23:09:42 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:33.390 23:09:42 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:33.390 23:09:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.390 23:09:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.390 23:09:42 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:33.390 23:09:42 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:33.390 23:09:42 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:37.582 23:09:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.582 23:09:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:37.582 23:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:37.582 23:09:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.582 23:09:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:37.582 23:09:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.582 23:09:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.582 23:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.582 MallocForNvmf0 00:05:37.582 23:09:46 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.582 23:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.841 MallocForNvmf1 00:05:37.841 23:09:46 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.841 23:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.841 [2024-07-10 23:09:46.874952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.841 23:09:46 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.841 23:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.100 23:09:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.100 23:09:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.359 23:09:47 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.359 23:09:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.359 23:09:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.360 23:09:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.618 [2024-07-10 23:09:47.557193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.618 23:09:47 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:38.618 23:09:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.618 23:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.618 23:09:47 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:38.618 23:09:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.618 23:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.618 23:09:47 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:38.618 23:09:47 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.618 23:09:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.876 MallocBdevForConfigChangeCheck 00:05:38.876 23:09:47 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:38.876 23:09:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.876 23:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.876 23:09:47 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:38.876 23:09:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.134 23:09:48 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:39.134 INFO: shutting down applications... 00:05:39.134 23:09:48 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:39.134 23:09:48 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:39.134 23:09:48 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:39.134 23:09:48 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:41.038 Calling clear_iscsi_subsystem 00:05:41.038 Calling clear_nvmf_subsystem 00:05:41.038 Calling clear_nbd_subsystem 00:05:41.038 Calling clear_ublk_subsystem 00:05:41.038 Calling clear_vhost_blk_subsystem 00:05:41.038 Calling clear_vhost_scsi_subsystem 00:05:41.039 Calling clear_bdev_subsystem 00:05:41.039 23:09:49 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:41.039 23:09:49 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:41.039 23:09:49 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:41.039 23:09:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.039 23:09:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:41.039 23:09:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:41.039 23:09:50 json_config -- json_config/json_config.sh@345 -- # break 00:05:41.039 23:09:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:41.039 23:09:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:41.039 23:09:50 json_config -- json_config/common.sh@31 -- # local app=target 00:05:41.039 23:09:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.039 23:09:50 json_config -- json_config/common.sh@35 -- # [[ -n 2223480 ]] 00:05:41.039 23:09:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2223480 00:05:41.039 23:09:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.039 23:09:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.039 23:09:50 json_config -- json_config/common.sh@41 -- # kill -0 2223480 00:05:41.039 23:09:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.606 23:09:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.606 23:09:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.606 23:09:50 json_config -- json_config/common.sh@41 -- # kill -0 2223480 00:05:41.606 23:09:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.172 23:09:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.172 23:09:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.172 23:09:51 json_config -- json_config/common.sh@41 -- # kill -0 2223480 
00:05:42.173 23:09:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.173 23:09:51 json_config -- json_config/common.sh@43 -- # break 00:05:42.173 23:09:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.173 23:09:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.173 SPDK target shutdown done 00:05:42.173 23:09:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:42.173 INFO: relaunching applications... 00:05:42.173 23:09:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.173 23:09:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:42.173 23:09:51 json_config -- json_config/common.sh@10 -- # shift 00:05:42.173 23:09:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:42.173 23:09:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:42.173 23:09:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:42.173 23:09:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.173 23:09:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.173 23:09:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2225268 00:05:42.173 23:09:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:42.173 Waiting for target to run... 00:05:42.173 23:09:51 json_config -- json_config/common.sh@25 -- # waitforlisten 2225268 /var/tmp/spdk_tgt.sock 00:05:42.173 23:09:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 2225268 ']' 00:05:42.173 23:09:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:42.173 23:09:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.173 23:09:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:42.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:42.173 23:09:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.173 23:09:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.173 23:09:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.173 [2024-07-10 23:09:51.160288] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
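The relaunch above starts spdk_tgt with --json spdk_tgt_config.json, i.e. it replays the configuration that was built over RPC earlier in this test. Reconstructed from that trace (socket and arguments exactly as logged; only the rpc.py path is factored into a variable):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # Two malloc bdevs to serve as namespaces.
    "$rpc" -s "$sock" bdev_malloc_create 8 512  --name MallocForNvmf0
    "$rpc" -s "$sock" bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then one subsystem carrying both namespaces and a listener.
    "$rpc" -s "$sock" nvmf_create_transport -t tcp -u 8192 -c 0
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420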
00:05:42.173 [2024-07-10 23:09:51.160391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225268 ] 00:05:42.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.431 [2024-07-10 23:09:51.474893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.690 [2024-07-10 23:09:51.676362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.922 [2024-07-10 23:09:55.441915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.922 [2024-07-10 23:09:55.474286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:46.922 23:09:55 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.922 23:09:55 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:46.922 23:09:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:46.922 00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:46.922 INFO: Checking if target configuration is the same... 00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:46.922 23:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.922 + '[' 2 -ne 2 ']' 00:05:46.922 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.922 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.922 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.922 +++ basename /dev/fd/62 00:05:46.922 ++ mktemp /tmp/62.XXX 00:05:46.922 + tmp_file_1=/tmp/62.ws5 00:05:46.922 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.922 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.922 + tmp_file_2=/tmp/spdk_tgt_config.json.Ykk 00:05:46.922 + ret=0 00:05:46.922 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.922 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.922 + diff -u /tmp/62.ws5 /tmp/spdk_tgt_config.json.Ykk 00:05:46.922 + echo 'INFO: JSON config files are the same' 00:05:46.922 INFO: JSON config files are the same 00:05:46.922 + rm /tmp/62.ws5 /tmp/spdk_tgt_config.json.Ykk 00:05:46.922 + exit 0 00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:46.922 INFO: changing configuration and checking if this can be detected... 
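The comparison that just passed is a plain textual diff of two config dumps. As traced, json_diff.sh reduces to the following sketch (temp-file naming and stdin/stdout plumbing are assumptions; the two filter invocations and the diff are as logged):

    sock=/var/tmp/spdk_tgt.sock
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    a=$(mktemp /tmp/62.XXX)                       # live config, sorted
    "$spdk"/scripts/rpc.py -s "$sock" save_config |
        "$spdk"/test/json_config/config_filter.py -method sort > "$a"
    b=$(mktemp /tmp/spdk_tgt_config.json.XXX)     # on-disk config, sorted
    "$spdk"/test/json_config/config_filter.py -method sort \
        < "$spdk"/spdk_tgt_config.json > "$b"
    if diff -u "$a" "$b"; then
        echo 'INFO: JSON config files are the same'
    else
        exit 1                                    # mismatch fails the test
    fi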
00:05:46.922 23:09:55 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.922 23:09:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.182 23:09:56 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.182 23:09:56 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:47.182 23:09:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.182 + '[' 2 -ne 2 ']' 00:05:47.182 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:47.182 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:47.182 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.182 +++ basename /dev/fd/62 00:05:47.182 ++ mktemp /tmp/62.XXX 00:05:47.182 + tmp_file_1=/tmp/62.JFJ 00:05:47.182 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.182 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:47.182 + tmp_file_2=/tmp/spdk_tgt_config.json.NTy 00:05:47.182 + ret=0 00:05:47.182 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.441 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.441 + diff -u /tmp/62.JFJ /tmp/spdk_tgt_config.json.NTy 00:05:47.441 + ret=1 00:05:47.441 + echo '=== Start of file: /tmp/62.JFJ ===' 00:05:47.441 + cat /tmp/62.JFJ 00:05:47.441 + echo '=== End of file: /tmp/62.JFJ ===' 00:05:47.441 + echo '' 00:05:47.441 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NTy ===' 00:05:47.441 + cat /tmp/spdk_tgt_config.json.NTy 00:05:47.441 + echo '=== End of file: /tmp/spdk_tgt_config.json.NTy ===' 00:05:47.441 + echo '' 00:05:47.441 + rm /tmp/62.JFJ /tmp/spdk_tgt_config.json.NTy 00:05:47.441 + exit 1 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:47.441 INFO: configuration change detected. 
00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@317 -- # [[ -n 2225268 ]] 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.441 23:09:56 json_config -- json_config/json_config.sh@323 -- # killprocess 2225268 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@948 -- # '[' -z 2225268 ']' 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@952 -- # kill -0 2225268 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@953 -- # uname 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225268 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225268' 00:05:47.441 killing process with pid 2225268 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@967 -- # kill 2225268 00:05:47.441 23:09:56 json_config -- common/autotest_common.sh@972 -- # wait 2225268 00:05:49.975 23:09:58 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.975 23:09:58 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:49.975 23:09:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.975 23:09:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.975 23:09:58 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:49.975 23:09:58 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:49.975 INFO: Success 00:05:49.975 00:05:49.975 real 0m17.404s 
00:05:49.975 user 0m18.123s 00:05:49.975 sys 0m2.153s 00:05:49.975 23:09:58 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.975 23:09:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.975 ************************************ 00:05:49.975 END TEST json_config 00:05:49.975 ************************************ 00:05:49.975 23:09:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.975 23:09:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:49.975 23:09:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.975 23:09:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.975 23:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.975 ************************************ 00:05:49.975 START TEST json_config_extra_key 00:05:49.975 ************************************ 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.975 23:09:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.975 23:09:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.975 23:09:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.975 23:09:58 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.975 23:09:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.975 23:09:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.975 23:09:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:49.975 23:09:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.975 23:09:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:49.975 23:09:58 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:49.975 INFO: launching applications... 00:05:49.975 23:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2226724 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.975 Waiting for target to run... 00:05:49.975 23:09:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2226724 /var/tmp/spdk_tgt.sock 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2226724 ']' 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.975 23:09:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.975 [2024-07-10 23:09:59.014423] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
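json_config_test_start_app, as traced here, amounts to "launch spdk_tgt with a JSON config, then block until the RPC socket answers". A sketch of that pattern; the launch arguments match the log, but using rpc_get_methods as the liveness probe is an assumption (the harness's waitforlisten does its own polling):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/spdk_tgt.sock
    "$spdk"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
        --json "$spdk"/test/json_config/extra_key.json &
    # Poll until the target answers RPCs (up to ~10 s).
    for i in $(seq 1 100); do
        "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done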
00:05:49.975 [2024-07-10 23:09:59.014535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226724 ] 00:05:50.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.494 [2024-07-10 23:09:59.330954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.494 [2024-07-10 23:09:59.526798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.431 23:10:00 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.431 23:10:00 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:51.431 00:05:51.431 23:10:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:51.431 INFO: shutting down applications... 00:05:51.431 23:10:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2226724 ]] 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2226724 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:51.431 23:10:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.999 23:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.999 23:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.999 23:10:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:51.999 23:10:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.258 23:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.258 23:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.258 23:10:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:52.258 23:10:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.825 23:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.825 23:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.825 23:10:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:52.825 23:10:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.392 23:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.392 23:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.392 23:10:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:53.392 23:10:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.961 23:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.961 23:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.961 23:10:02 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:53.961 23:10:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2226724 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.530 23:10:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.530 SPDK target shutdown done 00:05:54.530 23:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.530 Success 00:05:54.530 00:05:54.530 real 0m4.474s 00:05:54.530 user 0m4.081s 00:05:54.530 sys 0m0.500s 00:05:54.530 23:10:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.530 23:10:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.530 ************************************ 00:05:54.530 END TEST json_config_extra_key 00:05:54.530 ************************************ 00:05:54.530 23:10:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.530 23:10:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.530 23:10:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.530 23:10:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.530 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.530 ************************************ 00:05:54.530 START TEST alias_rpc 00:05:54.530 ************************************ 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.530 * Looking for test storage... 00:05:54.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:54.530 23:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.530 23:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2227621 00:05:54.530 23:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2227621 00:05:54.530 23:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2227621 ']' 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.530 23:10:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.530 [2024-07-10 23:10:03.556934] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
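The six kill -0 probes traced above are json_config/common.sh's shutdown wait (lines 38-45 in the trace markers). Reconstructed from that trace as a sketch:

    kill -SIGINT "$app_pid"                       # common.sh@38: request clean shutdown
    for (( i = 0; i < 30; i++ )); do              # common.sh@40: up to ~15 s total
        if ! kill -0 "$app_pid" 2>/dev/null; then # common.sh@41: probe only, no signal sent
            app_pid=''                            # common.sh@42: forget the dead pid
            echo 'SPDK target shutdown done'      # common.sh@53
            break                                 # common.sh@43
        fi
        sleep 0.5                                 # common.sh@45
    done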
00:05:54.530 [2024-07-10 23:10:03.557050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227621 ] 00:05:54.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.788 [2024-07-10 23:10:03.657676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.046 [2024-07-10 23:10:03.864052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.982 23:10:04 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.982 23:10:04 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.983 23:10:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:55.983 23:10:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2227621 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2227621 ']' 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2227621 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227621 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227621' 00:05:55.983 killing process with pid 2227621 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@967 -- # kill 2227621 00:05:55.983 23:10:04 alias_rpc -- common/autotest_common.sh@972 -- # wait 2227621 00:05:58.517 00:05:58.517 real 0m4.010s 00:05:58.517 user 0m4.039s 00:05:58.517 sys 0m0.491s 00:05:58.517 23:10:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.517 23:10:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 ************************************ 00:05:58.517 END TEST alias_rpc 00:05:58.517 ************************************ 00:05:58.517 23:10:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.517 23:10:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:58.517 23:10:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.517 23:10:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.517 23:10:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.517 23:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 ************************************ 00:05:58.517 START TEST spdkcli_tcp 00:05:58.517 ************************************ 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.517 * Looking for test storage... 
00:05:58.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2228376 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2228376 00:05:58.517 23:10:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2228376 ']' 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.517 23:10:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.777 [2024-07-10 23:10:07.626983] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
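The spdkcli_tcp run starting here drives the same RPCs over TCP instead of the UNIX socket: the trace just below bridges /var/tmp/spdk.sock to 127.0.0.1:9998 with socat and points rpc.py at the TCP endpoint. Reduced to its essentials, using the exact flags visible in the trace:

    # Bridge as traced in tcp.sh@30-33; -r 100 connection retries, -t 2 s timeout.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

The rpc_get_methods reply that follows is the full method list the target exposes; it doubles as a smoke test that the TCP path works.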
00:05:58.777 [2024-07-10 23:10:07.627102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228376 ] 00:05:58.777 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.777 [2024-07-10 23:10:07.728224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.037 [2024-07-10 23:10:07.941329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.037 [2024-07-10 23:10:07.941340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.974 23:10:08 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.974 23:10:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:59.974 23:10:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2228590 00:05:59.974 23:10:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.974 23:10:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.233 [ 00:06:00.233 "bdev_malloc_delete", 00:06:00.233 "bdev_malloc_create", 00:06:00.233 "bdev_null_resize", 00:06:00.233 "bdev_null_delete", 00:06:00.233 "bdev_null_create", 00:06:00.233 "bdev_nvme_cuse_unregister", 00:06:00.233 "bdev_nvme_cuse_register", 00:06:00.233 "bdev_opal_new_user", 00:06:00.233 "bdev_opal_set_lock_state", 00:06:00.233 "bdev_opal_delete", 00:06:00.233 "bdev_opal_get_info", 00:06:00.233 "bdev_opal_create", 00:06:00.233 "bdev_nvme_opal_revert", 00:06:00.233 "bdev_nvme_opal_init", 00:06:00.233 "bdev_nvme_send_cmd", 00:06:00.234 "bdev_nvme_get_path_iostat", 00:06:00.234 "bdev_nvme_get_mdns_discovery_info", 00:06:00.234 "bdev_nvme_stop_mdns_discovery", 00:06:00.234 "bdev_nvme_start_mdns_discovery", 00:06:00.234 "bdev_nvme_set_multipath_policy", 00:06:00.234 "bdev_nvme_set_preferred_path", 00:06:00.234 "bdev_nvme_get_io_paths", 00:06:00.234 "bdev_nvme_remove_error_injection", 00:06:00.234 "bdev_nvme_add_error_injection", 00:06:00.234 "bdev_nvme_get_discovery_info", 00:06:00.234 "bdev_nvme_stop_discovery", 00:06:00.234 "bdev_nvme_start_discovery", 00:06:00.234 "bdev_nvme_get_controller_health_info", 00:06:00.234 "bdev_nvme_disable_controller", 00:06:00.234 "bdev_nvme_enable_controller", 00:06:00.234 "bdev_nvme_reset_controller", 00:06:00.234 "bdev_nvme_get_transport_statistics", 00:06:00.234 "bdev_nvme_apply_firmware", 00:06:00.234 "bdev_nvme_detach_controller", 00:06:00.234 "bdev_nvme_get_controllers", 00:06:00.234 "bdev_nvme_attach_controller", 00:06:00.234 "bdev_nvme_set_hotplug", 00:06:00.234 "bdev_nvme_set_options", 00:06:00.234 "bdev_passthru_delete", 00:06:00.234 "bdev_passthru_create", 00:06:00.234 "bdev_lvol_set_parent_bdev", 00:06:00.234 "bdev_lvol_set_parent", 00:06:00.234 "bdev_lvol_check_shallow_copy", 00:06:00.234 "bdev_lvol_start_shallow_copy", 00:06:00.234 "bdev_lvol_grow_lvstore", 00:06:00.234 "bdev_lvol_get_lvols", 00:06:00.234 "bdev_lvol_get_lvstores", 00:06:00.234 "bdev_lvol_delete", 00:06:00.234 "bdev_lvol_set_read_only", 00:06:00.234 "bdev_lvol_resize", 00:06:00.234 "bdev_lvol_decouple_parent", 00:06:00.234 "bdev_lvol_inflate", 00:06:00.234 "bdev_lvol_rename", 00:06:00.234 "bdev_lvol_clone_bdev", 00:06:00.234 "bdev_lvol_clone", 00:06:00.234 "bdev_lvol_snapshot", 00:06:00.234 "bdev_lvol_create", 00:06:00.234 "bdev_lvol_delete_lvstore", 00:06:00.234 
"bdev_lvol_rename_lvstore", 00:06:00.234 "bdev_lvol_create_lvstore", 00:06:00.234 "bdev_raid_set_options", 00:06:00.234 "bdev_raid_remove_base_bdev", 00:06:00.234 "bdev_raid_add_base_bdev", 00:06:00.234 "bdev_raid_delete", 00:06:00.234 "bdev_raid_create", 00:06:00.234 "bdev_raid_get_bdevs", 00:06:00.234 "bdev_error_inject_error", 00:06:00.234 "bdev_error_delete", 00:06:00.234 "bdev_error_create", 00:06:00.234 "bdev_split_delete", 00:06:00.234 "bdev_split_create", 00:06:00.234 "bdev_delay_delete", 00:06:00.234 "bdev_delay_create", 00:06:00.234 "bdev_delay_update_latency", 00:06:00.234 "bdev_zone_block_delete", 00:06:00.234 "bdev_zone_block_create", 00:06:00.234 "blobfs_create", 00:06:00.234 "blobfs_detect", 00:06:00.234 "blobfs_set_cache_size", 00:06:00.234 "bdev_aio_delete", 00:06:00.234 "bdev_aio_rescan", 00:06:00.234 "bdev_aio_create", 00:06:00.234 "bdev_ftl_set_property", 00:06:00.234 "bdev_ftl_get_properties", 00:06:00.234 "bdev_ftl_get_stats", 00:06:00.234 "bdev_ftl_unmap", 00:06:00.234 "bdev_ftl_unload", 00:06:00.234 "bdev_ftl_delete", 00:06:00.234 "bdev_ftl_load", 00:06:00.234 "bdev_ftl_create", 00:06:00.234 "bdev_virtio_attach_controller", 00:06:00.234 "bdev_virtio_scsi_get_devices", 00:06:00.234 "bdev_virtio_detach_controller", 00:06:00.234 "bdev_virtio_blk_set_hotplug", 00:06:00.234 "bdev_iscsi_delete", 00:06:00.234 "bdev_iscsi_create", 00:06:00.234 "bdev_iscsi_set_options", 00:06:00.234 "accel_error_inject_error", 00:06:00.234 "ioat_scan_accel_module", 00:06:00.234 "dsa_scan_accel_module", 00:06:00.234 "iaa_scan_accel_module", 00:06:00.234 "keyring_file_remove_key", 00:06:00.234 "keyring_file_add_key", 00:06:00.234 "keyring_linux_set_options", 00:06:00.234 "iscsi_get_histogram", 00:06:00.234 "iscsi_enable_histogram", 00:06:00.234 "iscsi_set_options", 00:06:00.234 "iscsi_get_auth_groups", 00:06:00.234 "iscsi_auth_group_remove_secret", 00:06:00.234 "iscsi_auth_group_add_secret", 00:06:00.234 "iscsi_delete_auth_group", 00:06:00.234 "iscsi_create_auth_group", 00:06:00.234 "iscsi_set_discovery_auth", 00:06:00.234 "iscsi_get_options", 00:06:00.234 "iscsi_target_node_request_logout", 00:06:00.234 "iscsi_target_node_set_redirect", 00:06:00.234 "iscsi_target_node_set_auth", 00:06:00.234 "iscsi_target_node_add_lun", 00:06:00.234 "iscsi_get_stats", 00:06:00.234 "iscsi_get_connections", 00:06:00.234 "iscsi_portal_group_set_auth", 00:06:00.234 "iscsi_start_portal_group", 00:06:00.234 "iscsi_delete_portal_group", 00:06:00.234 "iscsi_create_portal_group", 00:06:00.234 "iscsi_get_portal_groups", 00:06:00.234 "iscsi_delete_target_node", 00:06:00.234 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.234 "iscsi_target_node_add_pg_ig_maps", 00:06:00.234 "iscsi_create_target_node", 00:06:00.234 "iscsi_get_target_nodes", 00:06:00.234 "iscsi_delete_initiator_group", 00:06:00.234 "iscsi_initiator_group_remove_initiators", 00:06:00.234 "iscsi_initiator_group_add_initiators", 00:06:00.234 "iscsi_create_initiator_group", 00:06:00.234 "iscsi_get_initiator_groups", 00:06:00.234 "nvmf_set_crdt", 00:06:00.234 "nvmf_set_config", 00:06:00.234 "nvmf_set_max_subsystems", 00:06:00.234 "nvmf_stop_mdns_prr", 00:06:00.234 "nvmf_publish_mdns_prr", 00:06:00.234 "nvmf_subsystem_get_listeners", 00:06:00.234 "nvmf_subsystem_get_qpairs", 00:06:00.234 "nvmf_subsystem_get_controllers", 00:06:00.234 "nvmf_get_stats", 00:06:00.234 "nvmf_get_transports", 00:06:00.234 "nvmf_create_transport", 00:06:00.234 "nvmf_get_targets", 00:06:00.234 "nvmf_delete_target", 00:06:00.234 "nvmf_create_target", 00:06:00.234 
"nvmf_subsystem_allow_any_host", 00:06:00.234 "nvmf_subsystem_remove_host", 00:06:00.234 "nvmf_subsystem_add_host", 00:06:00.234 "nvmf_ns_remove_host", 00:06:00.234 "nvmf_ns_add_host", 00:06:00.234 "nvmf_subsystem_remove_ns", 00:06:00.234 "nvmf_subsystem_add_ns", 00:06:00.234 "nvmf_subsystem_listener_set_ana_state", 00:06:00.234 "nvmf_discovery_get_referrals", 00:06:00.234 "nvmf_discovery_remove_referral", 00:06:00.234 "nvmf_discovery_add_referral", 00:06:00.234 "nvmf_subsystem_remove_listener", 00:06:00.234 "nvmf_subsystem_add_listener", 00:06:00.234 "nvmf_delete_subsystem", 00:06:00.234 "nvmf_create_subsystem", 00:06:00.234 "nvmf_get_subsystems", 00:06:00.234 "env_dpdk_get_mem_stats", 00:06:00.234 "nbd_get_disks", 00:06:00.234 "nbd_stop_disk", 00:06:00.234 "nbd_start_disk", 00:06:00.234 "ublk_recover_disk", 00:06:00.234 "ublk_get_disks", 00:06:00.234 "ublk_stop_disk", 00:06:00.234 "ublk_start_disk", 00:06:00.234 "ublk_destroy_target", 00:06:00.234 "ublk_create_target", 00:06:00.234 "virtio_blk_create_transport", 00:06:00.234 "virtio_blk_get_transports", 00:06:00.234 "vhost_controller_set_coalescing", 00:06:00.234 "vhost_get_controllers", 00:06:00.234 "vhost_delete_controller", 00:06:00.234 "vhost_create_blk_controller", 00:06:00.234 "vhost_scsi_controller_remove_target", 00:06:00.234 "vhost_scsi_controller_add_target", 00:06:00.234 "vhost_start_scsi_controller", 00:06:00.234 "vhost_create_scsi_controller", 00:06:00.234 "thread_set_cpumask", 00:06:00.234 "framework_get_governor", 00:06:00.234 "framework_get_scheduler", 00:06:00.234 "framework_set_scheduler", 00:06:00.234 "framework_get_reactors", 00:06:00.234 "thread_get_io_channels", 00:06:00.234 "thread_get_pollers", 00:06:00.234 "thread_get_stats", 00:06:00.234 "framework_monitor_context_switch", 00:06:00.234 "spdk_kill_instance", 00:06:00.234 "log_enable_timestamps", 00:06:00.234 "log_get_flags", 00:06:00.234 "log_clear_flag", 00:06:00.234 "log_set_flag", 00:06:00.234 "log_get_level", 00:06:00.234 "log_set_level", 00:06:00.234 "log_get_print_level", 00:06:00.234 "log_set_print_level", 00:06:00.234 "framework_enable_cpumask_locks", 00:06:00.234 "framework_disable_cpumask_locks", 00:06:00.234 "framework_wait_init", 00:06:00.234 "framework_start_init", 00:06:00.234 "scsi_get_devices", 00:06:00.234 "bdev_get_histogram", 00:06:00.234 "bdev_enable_histogram", 00:06:00.234 "bdev_set_qos_limit", 00:06:00.234 "bdev_set_qd_sampling_period", 00:06:00.234 "bdev_get_bdevs", 00:06:00.234 "bdev_reset_iostat", 00:06:00.234 "bdev_get_iostat", 00:06:00.234 "bdev_examine", 00:06:00.234 "bdev_wait_for_examine", 00:06:00.234 "bdev_set_options", 00:06:00.234 "notify_get_notifications", 00:06:00.234 "notify_get_types", 00:06:00.234 "accel_get_stats", 00:06:00.234 "accel_set_options", 00:06:00.234 "accel_set_driver", 00:06:00.234 "accel_crypto_key_destroy", 00:06:00.234 "accel_crypto_keys_get", 00:06:00.234 "accel_crypto_key_create", 00:06:00.234 "accel_assign_opc", 00:06:00.234 "accel_get_module_info", 00:06:00.234 "accel_get_opc_assignments", 00:06:00.234 "vmd_rescan", 00:06:00.234 "vmd_remove_device", 00:06:00.234 "vmd_enable", 00:06:00.234 "sock_get_default_impl", 00:06:00.234 "sock_set_default_impl", 00:06:00.234 "sock_impl_set_options", 00:06:00.234 "sock_impl_get_options", 00:06:00.234 "iobuf_get_stats", 00:06:00.234 "iobuf_set_options", 00:06:00.234 "framework_get_pci_devices", 00:06:00.234 "framework_get_config", 00:06:00.234 "framework_get_subsystems", 00:06:00.234 "trace_get_info", 00:06:00.234 "trace_get_tpoint_group_mask", 00:06:00.234 
"trace_disable_tpoint_group", 00:06:00.234 "trace_enable_tpoint_group", 00:06:00.234 "trace_clear_tpoint_mask", 00:06:00.234 "trace_set_tpoint_mask", 00:06:00.234 "keyring_get_keys", 00:06:00.234 "spdk_get_version", 00:06:00.234 "rpc_get_methods" 00:06:00.234 ] 00:06:00.234 23:10:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.234 23:10:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.234 23:10:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2228376 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2228376 ']' 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2228376 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2228376 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2228376' 00:06:00.234 killing process with pid 2228376 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2228376 00:06:00.234 23:10:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2228376 00:06:02.762 00:06:02.762 real 0m4.176s 00:06:02.762 user 0m7.496s 00:06:02.762 sys 0m0.530s 00:06:02.762 23:10:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.762 23:10:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.762 ************************************ 00:06:02.762 END TEST spdkcli_tcp 00:06:02.762 ************************************ 00:06:02.762 23:10:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.762 23:10:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.762 23:10:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.762 23:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.762 23:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.762 ************************************ 00:06:02.762 START TEST dpdk_mem_utility 00:06:02.762 ************************************ 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.762 * Looking for test storage... 
00:06:02.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:02.762 23:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.762 23:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2229188 00:06:02.762 23:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.762 23:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2229188 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2229188 ']' 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.762 23:10:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.019 [2024-07-10 23:10:11.837722] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:03.019 [2024-07-10 23:10:11.837820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229188 ] 00:06:03.019 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.019 [2024-07-10 23:10:11.941601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.276 [2024-07-10 23:10:12.156133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.210 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.210 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:04.210 23:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.210 23:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.210 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.210 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.210 { 00:06:04.210 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.210 } 00:06:04.210 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.210 23:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:04.210 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:04.210 1 heaps totaling size 820.000000 MiB 00:06:04.210 size: 820.000000 MiB heap id: 0 00:06:04.210 end heaps---------- 00:06:04.210 8 mempools totaling size 598.116089 MiB 00:06:04.210 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.210 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.210 size: 84.521057 MiB name: bdev_io_2229188 00:06:04.210 size: 51.011292 MiB name: evtpool_2229188 00:06:04.210 
size: 50.003479 MiB name: msgpool_2229188 00:06:04.210 size: 21.763794 MiB name: PDU_Pool 00:06:04.210 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:04.210 size: 0.026123 MiB name: Session_Pool 00:06:04.210 end mempools------- 00:06:04.210 6 memzones totaling size 4.142822 MiB 00:06:04.210 size: 1.000366 MiB name: RG_ring_0_2229188 00:06:04.210 size: 1.000366 MiB name: RG_ring_1_2229188 00:06:04.210 size: 1.000366 MiB name: RG_ring_4_2229188 00:06:04.210 size: 1.000366 MiB name: RG_ring_5_2229188 00:06:04.210 size: 0.125366 MiB name: RG_ring_2_2229188 00:06:04.210 size: 0.015991 MiB name: RG_ring_3_2229188 00:06:04.210 end memzones------- 00:06:04.210 23:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:04.210 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:04.210 list of free elements. size: 18.514832 MiB 00:06:04.210 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:04.210 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:04.210 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:04.210 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:04.210 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:04.210 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:04.210 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:04.210 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:04.210 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:04.210 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:04.210 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:04.210 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:04.210 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:04.210 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:04.210 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:04.210 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:04.210 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:04.210 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:04.210 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:04.210 list of standard malloc elements. 
size: 199.220764 MiB 00:06:04.210 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:04.210 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:04.210 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:04.210 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:04.210 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:04.210 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:04.210 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:04.210 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:04.210 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:04.210 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:04.210 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:04.210 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:04.210 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:04.210 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:04.210 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:04.210 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:04.210 list of memzone associated elements. 
size: 602.264404 MiB 00:06:04.210 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:04.210 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:04.210 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:04.210 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:04.210 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:04.210 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2229188_0 00:06:04.210 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:04.210 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2229188_0 00:06:04.210 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:04.210 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2229188_0 00:06:04.210 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:04.210 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:04.210 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:04.210 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:04.210 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:04.210 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2229188 00:06:04.210 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:04.210 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2229188 00:06:04.210 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:04.210 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2229188 00:06:04.210 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:04.210 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:04.210 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:04.210 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:04.210 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:04.210 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:04.210 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:04.210 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:04.210 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:04.210 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2229188 00:06:04.210 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:04.210 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2229188 00:06:04.210 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:04.210 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2229188 00:06:04.210 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:04.210 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2229188 00:06:04.210 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:04.210 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2229188 00:06:04.210 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:04.210 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:04.210 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:04.210 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:04.210 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:04.210 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:04.210 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:04.210 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2229188 00:06:04.210 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:04.210 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:04.210 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:04.210 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:04.210 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:04.210 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2229188 00:06:04.210 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:04.210 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:04.210 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:04.210 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2229188 00:06:04.210 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:04.210 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2229188 00:06:04.210 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:04.210 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:04.210 23:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:04.210 23:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2229188 00:06:04.210 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2229188 ']' 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2229188 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2229188 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2229188' 00:06:04.211 killing process with pid 2229188 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2229188 00:06:04.211 23:10:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2229188 00:06:06.767 00:06:06.767 real 0m3.964s 00:06:06.767 user 0m3.928s 00:06:06.767 sys 0m0.499s 00:06:06.767 23:10:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.767 23:10:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.767 ************************************ 00:06:06.767 END TEST dpdk_mem_utility 00:06:06.767 ************************************ 00:06:06.767 23:10:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.767 23:10:15 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.767 23:10:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.767 23:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.767 23:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.767 ************************************ 00:06:06.767 START TEST event 00:06:06.767 ************************************ 00:06:06.767 23:10:15 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.767 * Looking for test storage... 
00:06:06.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.767 23:10:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:06.767 23:10:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.767 23:10:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.767 23:10:15 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:06.767 23:10:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.767 23:10:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.767 ************************************ 00:06:06.767 START TEST event_perf 00:06:06.767 ************************************ 00:06:06.767 23:10:15 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.026 Running I/O for 1 seconds...[2024-07-10 23:10:15.873467] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:07.026 [2024-07-10 23:10:15.873546] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229934 ] 00:06:07.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.026 [2024-07-10 23:10:15.975452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.285 [2024-07-10 23:10:16.188622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.285 [2024-07-10 23:10:16.188696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.285 [2024-07-10 23:10:16.188756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.285 [2024-07-10 23:10:16.188778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.664 Running I/O for 1 seconds... 00:06:08.664 lcore 0: 197097 00:06:08.664 lcore 1: 197096 00:06:08.664 lcore 2: 197096 00:06:08.664 lcore 3: 197096 00:06:08.664 done. 00:06:08.664 00:06:08.664 real 0m1.767s 00:06:08.664 user 0m4.628s 00:06:08.664 sys 0m0.133s 00:06:08.664 23:10:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.664 23:10:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.664 ************************************ 00:06:08.664 END TEST event_perf 00:06:08.664 ************************************ 00:06:08.664 23:10:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:08.664 23:10:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.664 23:10:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:08.664 23:10:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.664 23:10:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.664 ************************************ 00:06:08.664 START TEST event_reactor 00:06:08.664 ************************************ 00:06:08.664 23:10:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.664 [2024-07-10 23:10:17.704286] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
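event_perf above ran one reactor per core of the 0xF mask for one second (-t 1) and each lcore processed roughly 197k events. A quick way to aggregate its per-lcore lines, assuming the output were captured to a hypothetical perf.out:

    # Output lines look like "lcore 0: 197097"; sum the third field.
    test/event/event_perf/event_perf -m 0xF -t 1 | tee perf.out
    grep '^lcore' perf.out | awk '{ sum += $3 } END { printf "total: %d events/sec\n", sum }'

For this run the total comes to roughly 788k events across the four cores.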
00:06:08.664 [2024-07-10 23:10:17.704364] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230200 ] 00:06:08.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.922 [2024-07-10 23:10:17.808346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.180 [2024-07-10 23:10:18.018079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.557 test_start 00:06:10.557 oneshot 00:06:10.557 tick 100 00:06:10.557 tick 100 00:06:10.557 tick 250 00:06:10.557 tick 100 00:06:10.557 tick 100 00:06:10.557 tick 250 00:06:10.557 tick 500 00:06:10.557 tick 100 00:06:10.557 tick 100 00:06:10.557 tick 100 00:06:10.557 tick 250 00:06:10.557 tick 100 00:06:10.557 tick 100 00:06:10.557 test_end 00:06:10.557 00:06:10.557 real 0m1.757s 00:06:10.557 user 0m1.609s 00:06:10.557 sys 0m0.140s 00:06:10.557 23:10:19 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.557 23:10:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:10.557 ************************************ 00:06:10.557 END TEST event_reactor 00:06:10.557 ************************************ 00:06:10.557 23:10:19 event -- common/autotest_common.sh@1142 -- # return 0 00:06:10.557 23:10:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.557 23:10:19 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:10.557 23:10:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.557 23:10:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.557 ************************************ 00:06:10.557 START TEST event_reactor_perf 00:06:10.557 ************************************ 00:06:10.558 23:10:19 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.558 [2024-07-10 23:10:19.517988] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
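Every test in this log is wrapped the same way: run_test prints the starred START/END banners, and the real/user/sys line comes from timing the wrapped command. The helper lives in autotest_common.sh; its rough shape, inferred from the banners alone (the real version also juggles xtrace state and exit codes):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@" || return 1
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }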
00:06:10.558 [2024-07-10 23:10:19.518076] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230460 ] 00:06:10.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.558 [2024-07-10 23:10:19.618407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.816 [2024-07-10 23:10:19.836521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.195 test_start 00:06:12.195 test_end 00:06:12.195 Performance: 379234 events per second 00:06:12.195 00:06:12.195 real 0m1.775s 00:06:12.195 user 0m1.641s 00:06:12.195 sys 0m0.126s 00:06:12.195 23:10:21 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.195 23:10:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.195 ************************************ 00:06:12.195 END TEST event_reactor_perf 00:06:12.195 ************************************ 00:06:12.454 23:10:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.454 23:10:21 event -- event/event.sh@49 -- # uname -s 00:06:12.454 23:10:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.454 23:10:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.454 23:10:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.454 23:10:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.454 23:10:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.454 ************************************ 00:06:12.454 START TEST event_scheduler 00:06:12.454 ************************************ 00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.454 * Looking for test storage... 00:06:12.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:12.454 23:10:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.454 23:10:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2230956 00:06:12.454 23:10:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.454 23:10:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.454 23:10:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2230956 00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2230956 ']' 00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
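The "Waiting for process to start up..." line above comes from waitforlisten, which every section here calls right after launching its binary. The traced locals (rpc_addr=/var/tmp/spdk.sock, max_retries=100) suggest a polling loop along these lines; this is a sketch, not the real helper from autotest_common.sh:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # An RPC that answers means the socket is up; a dead PID means give up.
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" 2> /dev/null || return 1
            sleep 0.1
        done
        return 1
    }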
00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.454 23:10:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.454 [2024-07-10 23:10:21.470674] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:12.454 [2024-07-10 23:10:21.470766] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230956 ] 00:06:12.713 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.713 [2024-07-10 23:10:21.571458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.972 [2024-07-10 23:10:21.787173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.972 [2024-07-10 23:10:21.787233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.972 [2024-07-10 23:10:21.787269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.972 [2024-07-10 23:10:21.787278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:13.232 23:10:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.232 [2024-07-10 23:10:22.261373] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:13.232 [2024-07-10 23:10:22.261404] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:13.232 [2024-07-10 23:10:22.261423] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:13.232 [2024-07-10 23:10:22.261434] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:13.232 [2024-07-10 23:10:22.261447] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.232 23:10:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.232 23:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 [2024-07-10 23:10:22.616579] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
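The scheduler app above was launched with --wait-for-rpc, so initialization parks until RPCs arrive: the trace switches to the dynamic scheduler, the DPDK governor refuses to initialize because the 0xF mask covers only some SMT siblings, and the scheduler falls back to its defaults (load limit 20, core limit 80, core busy 95). The two traced RPCs are simply:

    # scheduler.sh@39-40 as traced; both methods appear in the rpc_get_methods list above.
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init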
00:06:13.801 23:10:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:13.801 23:10:22 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.801 23:10:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 ************************************ 00:06:13.801 START TEST scheduler_create_thread 00:06:13.801 ************************************ 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 2 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 3 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 4 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 5 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 6 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 7 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 8 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 9 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 10 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.801 23:10:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.176 23:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.176 23:10:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.176 23:10:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.176 23:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.176 23:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.554 23:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.554 00:06:16.554 real 0m2.627s 00:06:16.554 user 0m0.023s 00:06:16.554 sys 0m0.005s 00:06:16.554 23:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.554 23:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.554 ************************************ 00:06:16.554 END TEST scheduler_create_thread 00:06:16.554 ************************************ 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:16.554 23:10:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.554 23:10:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2230956 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2230956 ']' 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2230956 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2230956 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2230956' 00:06:16.554 killing process with pid 2230956 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2230956 00:06:16.554 23:10:25 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2230956 00:06:16.813 [2024-07-10 23:10:25.757577] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
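The scheduler_create_thread subtest that just finished drives RPCs provided by a test plugin rather than the core method table: ten threads with varying masks and activity levels, one thread flipped to 50% active, and thread 12 deleted again. The traced calls reduce to roughly this, assuming the scheduler_plugin module is importable by rpc.py:

    rpc() { scripts/rpc.py --plugin scheduler_plugin "$@"; }
    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for 0x2, 0x4, 0x8
    rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # likewise, one per core
    rpc scheduler_thread_create -n one_third_active -a 30
    tid=$(rpc scheduler_thread_create -n half_active -a 0)       # returned 11 in this run
    rpc scheduler_thread_set_active "$tid" 50
    tid=$(rpc scheduler_thread_create -n deleted -a 100)         # returned 12 in this run
    rpc scheduler_thread_delete "$tid"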
00:06:18.192 00:06:18.192 real 0m5.752s 00:06:18.192 user 0m9.693s 00:06:18.192 sys 0m0.429s 00:06:18.192 23:10:27 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.192 23:10:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.192 ************************************ 00:06:18.192 END TEST event_scheduler 00:06:18.192 ************************************ 00:06:18.192 23:10:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.192 23:10:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:18.192 23:10:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:18.192 23:10:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.192 23:10:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.192 23:10:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.192 ************************************ 00:06:18.192 START TEST app_repeat 00:06:18.192 ************************************ 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2231935 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2231935' 00:06:18.192 Process app_repeat pid: 2231935 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:18.192 spdk_app_start Round 0 00:06:18.192 23:10:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2231935 /var/tmp/spdk-nbd.sock 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2231935 ']' 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.192 23:10:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.192 [2024-07-10 23:10:27.174783] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
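[annotation] app_repeat is started on its own RPC socket and the test then blocks until that socket accepts connections before issuing any bdev RPCs. A rough sketch of that start-and-wait handshake, using the flags from the trace; the polling loop is a crude stand-in for autotest_common.sh's waitforlisten, whose internals are not shown here:

    sock=/var/tmp/spdk-nbd.sock
    test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # poll until the app answers on its UNIX-domain RPC socket
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done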
00:06:18.192 [2024-07-10 23:10:27.174866] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231935 ] 00:06:18.192 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.451 [2024-07-10 23:10:27.277600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.451 [2024-07-10 23:10:27.497625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.451 [2024-07-10 23:10:27.497637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.020 23:10:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.020 23:10:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:19.020 23:10:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.278 Malloc0 00:06:19.278 23:10:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.538 Malloc1 00:06:19.538 23:10:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.538 23:10:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.797 /dev/nbd0 00:06:19.797 23:10:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.797 23:10:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.797 23:10:28 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.797 1+0 records in 00:06:19.797 1+0 records out 00:06:19.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020127 s, 20.4 MB/s 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.797 23:10:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.798 23:10:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.798 23:10:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.798 23:10:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.798 /dev/nbd1 00:06:19.798 23:10:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.798 23:10:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.798 23:10:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.057 23:10:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.057 1+0 records in 00:06:20.058 1+0 records out 00:06:20.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195702 s, 20.9 MB/s 00:06:20.058 23:10:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.058 23:10:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.058 23:10:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.058 23:10:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.058 23:10:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.058 23:10:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.058 23:10:28 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.058 23:10:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.058 23:10:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.058 23:10:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.058 { 00:06:20.058 "nbd_device": "/dev/nbd0", 00:06:20.058 "bdev_name": "Malloc0" 00:06:20.058 }, 00:06:20.058 { 00:06:20.058 "nbd_device": "/dev/nbd1", 00:06:20.058 "bdev_name": "Malloc1" 00:06:20.058 } 00:06:20.058 ]' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.058 { 00:06:20.058 "nbd_device": "/dev/nbd0", 00:06:20.058 "bdev_name": "Malloc0" 00:06:20.058 }, 00:06:20.058 { 00:06:20.058 "nbd_device": "/dev/nbd1", 00:06:20.058 "bdev_name": "Malloc1" 00:06:20.058 } 00:06:20.058 ]' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.058 /dev/nbd1' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.058 /dev/nbd1' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.058 256+0 records in 00:06:20.058 256+0 records out 00:06:20.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103461 s, 101 MB/s 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.058 23:10:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.317 256+0 records in 00:06:20.317 256+0 records out 00:06:20.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164887 s, 63.6 MB/s 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.317 256+0 records in 00:06:20.317 256+0 records out 00:06:20.317 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0186532 s, 56.2 MB/s 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.317 23:10:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.576 23:10:29 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.576 23:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.834 23:10:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.834 23:10:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.401 23:10:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.780 [2024-07-10 23:10:31.587457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.780 [2024-07-10 23:10:31.788636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.780 [2024-07-10 23:10:31.788636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.039 [2024-07-10 23:10:32.023409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.039 [2024-07-10 23:10:32.023463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.416 23:10:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.416 23:10:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:24.416 spdk_app_start Round 1 00:06:24.416 23:10:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2231935 /var/tmp/spdk-nbd.sock 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2231935 ']' 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
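[annotation] The waitfornbd loops traced in the round above follow a two-stage shape: wait for the kernel to publish the device in /proc/partitions, then prove it returns data with a single 4 KiB O_DIRECT read. A condensed sketch reconstructed from that trace (the tmp path and the sleep pacing are illustrative; the real helper lives in autotest_common.sh):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        # stage 1: device node shows up in the partition table
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # stage 2: one direct 4 KiB read must produce a non-empty file
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }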
00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.416 23:10:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:24.416 23:10:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.675 Malloc0 00:06:24.675 23:10:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.936 Malloc1 00:06:24.936 23:10:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.936 23:10:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.936 /dev/nbd0 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:25.226 1+0 records in 00:06:25.226 1+0 records out 00:06:25.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180514 s, 22.7 MB/s 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.226 /dev/nbd1 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.226 23:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.226 1+0 records in 00:06:25.226 1+0 records out 00:06:25.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224151 s, 18.3 MB/s 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.226 23:10:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.227 23:10:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.227 23:10:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.227 23:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.227 23:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.227 23:10:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.227 23:10:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.227 23:10:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.485 23:10:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:25.485 { 00:06:25.485 "nbd_device": "/dev/nbd0", 00:06:25.485 "bdev_name": "Malloc0" 00:06:25.485 }, 00:06:25.485 { 00:06:25.485 "nbd_device": "/dev/nbd1", 00:06:25.485 "bdev_name": "Malloc1" 00:06:25.485 } 00:06:25.485 ]' 00:06:25.485 23:10:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.485 { 00:06:25.486 "nbd_device": "/dev/nbd0", 00:06:25.486 "bdev_name": "Malloc0" 00:06:25.486 }, 00:06:25.486 { 00:06:25.486 "nbd_device": "/dev/nbd1", 00:06:25.486 "bdev_name": "Malloc1" 00:06:25.486 } 00:06:25.486 ]' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.486 /dev/nbd1' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.486 /dev/nbd1' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.486 256+0 records in 00:06:25.486 256+0 records out 00:06:25.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414271 s, 253 MB/s 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.486 256+0 records in 00:06:25.486 256+0 records out 00:06:25.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015514 s, 67.6 MB/s 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.486 256+0 records in 00:06:25.486 256+0 records out 00:06:25.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198113 s, 52.9 MB/s 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.486 23:10:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.745 23:10:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.004 23:10:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.262 23:10:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.262 23:10:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.521 23:10:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.902 [2024-07-10 23:10:36.938498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.161 [2024-07-10 23:10:37.144959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.161 [2024-07-10 23:10:37.144966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.419 [2024-07-10 23:10:37.381568] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.419 [2024-07-10 23:10:37.381612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.796 23:10:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.796 23:10:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.796 spdk_app_start Round 2 00:06:29.796 23:10:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2231935 /var/tmp/spdk-nbd.sock 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2231935 ']' 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
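[annotation] Each round's nbd_dd_data_verify pass, traced above, writes one random 1 MiB pattern through both exports and then compares each device byte-for-byte against the source file. The core of that write/verify pass, reconstructed from the trace (file location illustrative):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: one random 1 MiB pattern, pushed to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1 MiB of each device must equal the pattern;
    # cmp exits non-zero on a mismatch, which fails the test
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"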
00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.796 23:10:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.796 23:10:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.054 Malloc0 00:06:30.054 23:10:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.312 Malloc1 00:06:30.312 23:10:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.312 23:10:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.312 23:10:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.313 /dev/nbd0 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.313 23:10:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.313 23:10:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.313 23:10:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.313 23:10:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.313 23:10:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.313 23:10:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.313 23:10:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:30.572 1+0 records in 00:06:30.572 1+0 records out 00:06:30.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246032 s, 16.6 MB/s 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.572 /dev/nbd1 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.572 1+0 records in 00:06:30.572 1+0 records out 00:06:30.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199693 s, 20.5 MB/s 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.572 23:10:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.572 23:10:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:30.831 { 00:06:30.831 "nbd_device": "/dev/nbd0", 00:06:30.831 "bdev_name": "Malloc0" 00:06:30.831 }, 00:06:30.831 { 00:06:30.831 "nbd_device": "/dev/nbd1", 00:06:30.831 "bdev_name": "Malloc1" 00:06:30.831 } 00:06:30.831 ]' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.831 { 00:06:30.831 "nbd_device": "/dev/nbd0", 00:06:30.831 "bdev_name": "Malloc0" 00:06:30.831 }, 00:06:30.831 { 00:06:30.831 "nbd_device": "/dev/nbd1", 00:06:30.831 "bdev_name": "Malloc1" 00:06:30.831 } 00:06:30.831 ]' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.831 /dev/nbd1' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.831 /dev/nbd1' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.831 256+0 records in 00:06:30.831 256+0 records out 00:06:30.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0098234 s, 107 MB/s 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.831 256+0 records in 00:06:30.831 256+0 records out 00:06:30.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164708 s, 63.7 MB/s 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.831 256+0 records in 00:06:30.831 256+0 records out 00:06:30.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187887 s, 55.8 MB/s 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.831 23:10:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.090 23:10:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.349 23:10:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.608 23:10:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.608 23:10:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.867 23:10:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.245 [2024-07-10 23:10:42.306263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.505 [2024-07-10 23:10:42.510089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.505 [2024-07-10 23:10:42.510090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.764 [2024-07-10 23:10:42.746128] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.764 [2024-07-10 23:10:42.746185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.140 23:10:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2231935 /var/tmp/spdk-nbd.sock 00:06:35.140 23:10:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2231935 ']' 00:06:35.140 23:10:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.140 23:10:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.140 23:10:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
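[annotation] After the devices are stopped, nbd_get_count asks the target for its remaining exports and expects an empty list; the count is derived by piping the RPC's JSON through jq and grep, as in the trace above. A small sketch of that check, assuming the same socket and RPC names:

    sock=/var/tmp/spdk-nbd.sock

    # list the exports still known to the target; expect an empty JSON array
    names=$(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')

    # grep -c prints 0 but exits non-zero on no matches, hence the || true
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [[ $count -eq 0 ]] || { echo "stale nbd exports: $names" >&2; exit 1; }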
00:06:35.140 23:10:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.140 23:10:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.140 23:10:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2231935 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2231935 ']' 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2231935 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2231935 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2231935' 00:06:35.140 killing process with pid 2231935 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2231935 00:06:35.140 23:10:44 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2231935 00:06:36.518 spdk_app_start is called in Round 0. 00:06:36.518 Shutdown signal received, stop current app iteration 00:06:36.518 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 reinitialization... 00:06:36.518 spdk_app_start is called in Round 1. 00:06:36.518 Shutdown signal received, stop current app iteration 00:06:36.518 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 reinitialization... 00:06:36.518 spdk_app_start is called in Round 2. 00:06:36.518 Shutdown signal received, stop current app iteration 00:06:36.518 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 reinitialization... 00:06:36.518 spdk_app_start is called in Round 3. 
00:06:36.518 Shutdown signal received, stop current app iteration 00:06:36.518 23:10:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.518 23:10:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.518 00:06:36.518 real 0m18.214s 00:06:36.518 user 0m36.970s 00:06:36.518 sys 0m2.448s 00:06:36.518 23:10:45 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.518 23:10:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.518 ************************************ 00:06:36.518 END TEST app_repeat 00:06:36.518 ************************************ 00:06:36.518 23:10:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.518 23:10:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.518 23:10:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.518 23:10:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.518 23:10:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.518 23:10:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.518 ************************************ 00:06:36.518 START TEST cpu_locks 00:06:36.518 ************************************ 00:06:36.518 23:10:45 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.518 * Looking for test storage... 00:06:36.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.518 23:10:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.518 23:10:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.518 23:10:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.518 23:10:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.518 23:10:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.518 23:10:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.518 23:10:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.518 ************************************ 00:06:36.518 START TEST default_locks 00:06:36.518 ************************************ 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2235158 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2235158 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2235158 ']' 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
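Every cpu_locks test that follows hinges on one check, visible in the trace as cpu_locks.sh line 22: a target started with a given core mask must hold a lock file named spdk_cpu_lock_* for each of its cores. Reconstructed from the xtrace below:

  locks_exist() {                             # cpu_locks.sh@22, as traced for pid 2235158
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

The "lslocks: write error" lines in the output are benign: grep -q exits on the first match and closes the pipe while lslocks is still writing.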
00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.518 23:10:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.777 [2024-07-10 23:10:45.613409] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:36.777 [2024-07-10 23:10:45.613510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235158 ] 00:06:36.777 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.777 [2024-07-10 23:10:45.719037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.036 [2024-07-10 23:10:45.933055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.972 23:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.972 23:10:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:37.972 23:10:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2235158 00:06:37.972 23:10:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2235158 00:06:37.972 23:10:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.972 lslocks: write error 00:06:37.972 23:10:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2235158 00:06:37.972 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2235158 ']' 00:06:37.972 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2235158 00:06:37.972 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:37.972 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.972 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2235158 00:06:38.231 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.231 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.231 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2235158' 00:06:38.231 killing process with pid 2235158 00:06:38.231 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2235158 00:06:38.231 23:10:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2235158 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2235158 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2235158 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2235158 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2235158 ']' 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2235158) - No such process 00:06:40.777 ERROR: process (pid: 2235158) is no longer running 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.777 00:06:40.777 real 0m3.971s 00:06:40.777 user 0m3.926s 00:06:40.777 sys 0m0.560s 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.777 23:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.777 ************************************ 00:06:40.777 END TEST default_locks 00:06:40.777 ************************************ 00:06:40.777 23:10:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.777 23:10:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.777 23:10:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.777 23:10:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.777 23:10:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.777 ************************************ 00:06:40.777 START TEST default_locks_via_rpc 00:06:40.777 ************************************ 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2235879 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2235879 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2235879 ']' 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.777 23:10:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.777 [2024-07-10 23:10:49.651497] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:40.777 [2024-07-10 23:10:49.651591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235879 ] 00:06:40.777 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.777 [2024-07-10 23:10:49.752834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.036 [2024-07-10 23:10:49.967002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.973 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2235879 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2235879 00:06:41.974 23:10:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
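Condensed from the xtrace above, the via-rpc variant drops and re-takes the core locks on the live target, then re-verifies them on the same pid:

  rpc_cmd framework_disable_cpumask_locks      # no lock files held afterwards, so no_locks passes
  rpc_cmd framework_enable_cpumask_locks       # locks re-acquired at runtime
  lslocks -p 2235879 | grep -q spdk_cpu_lock   # and visible again on the process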
00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2235879 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2235879 ']' 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2235879 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2235879 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2235879' 00:06:42.542 killing process with pid 2235879 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2235879 00:06:42.542 23:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2235879 00:06:45.075 00:06:45.075 real 0m4.236s 00:06:45.075 user 0m4.177s 00:06:45.075 sys 0m0.664s 00:06:45.075 23:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.075 23:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.075 ************************************ 00:06:45.075 END TEST default_locks_via_rpc 00:06:45.075 ************************************ 00:06:45.075 23:10:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.075 23:10:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.075 23:10:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.075 23:10:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.075 23:10:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.075 ************************************ 00:06:45.075 START TEST non_locking_app_on_locked_coremask 00:06:45.075 ************************************ 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2236599 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2236599 /var/tmp/spdk.sock 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2236599 ']' 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.075 23:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.075 [2024-07-10 23:10:53.936404] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:45.075 [2024-07-10 23:10:53.936498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236599 ] 00:06:45.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.075 [2024-07-10 23:10:54.041091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.362 [2024-07-10 23:10:54.252474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2236831 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2236831 /var/tmp/spdk2.sock 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2236831 ']' 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.310 23:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.310 [2024-07-10 23:10:55.208449] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:46.310 [2024-07-10 23:10:55.208543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236831 ] 00:06:46.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.310 [2024-07-10 23:10:55.348328] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
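What this test demonstrates, condensed from the two launches above: two targets can share core 0 only because the second one opts out of lock acquisition, hence the "CPU core locks deactivated" notice:

  spdk_tgt -m 0x1 &                                                # pid 2236599, claims the core 0 lock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # pid 2236831, takes no lock, coexists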
00:06:46.310 [2024-07-10 23:10:55.348382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.876 [2024-07-10 23:10:55.779884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.775 23:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.775 23:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.775 23:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2236599 00:06:48.775 23:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2236599 00:06:48.775 23:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.340 lslocks: write error 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2236599 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2236599 ']' 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2236599 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2236599 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2236599' 00:06:49.340 killing process with pid 2236599 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2236599 00:06:49.340 23:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2236599 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2236831 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2236831 ']' 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2236831 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2236831 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2236831' 00:06:54.611 
killing process with pid 2236831 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2236831 00:06:54.611 23:11:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2236831 00:06:57.148 00:06:57.148 real 0m11.808s 00:06:57.148 user 0m11.986s 00:06:57.148 sys 0m1.187s 00:06:57.148 23:11:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.148 23:11:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.148 ************************************ 00:06:57.148 END TEST non_locking_app_on_locked_coremask 00:06:57.148 ************************************ 00:06:57.148 23:11:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.148 23:11:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.148 23:11:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.148 23:11:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.148 23:11:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.148 ************************************ 00:06:57.148 START TEST locking_app_on_unlocked_coremask 00:06:57.148 ************************************ 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2238703 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2238703 /var/tmp/spdk.sock 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2238703 ']' 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.148 23:11:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.148 [2024-07-10 23:11:05.818164] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
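locking_app_on_unlocked_coremask inverts the previous case, as the launches that follow show: the first target skips locking, so a second, locking target can claim the same core. In outline:

  spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 2238703, leaves core 0 unclaimed
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 2238939, acquires the core 0 lock and starts normally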
00:06:57.148 [2024-07-10 23:11:05.818256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238703 ] 00:06:57.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.148 [2024-07-10 23:11:05.919928] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.148 [2024-07-10 23:11:05.919965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.148 [2024-07-10 23:11:06.125955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2238939 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2238939 /var/tmp/spdk2.sock 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2238939 ']' 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.086 23:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.086 [2024-07-10 23:11:07.115764] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:06:58.086 [2024-07-10 23:11:07.115860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238939 ] 00:06:58.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.346 [2024-07-10 23:11:07.256569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.914 [2024-07-10 23:11:07.682057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.820 23:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.820 23:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.820 23:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2238939 00:07:00.820 23:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2238939 00:07:00.820 23:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.079 lslocks: write error 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2238703 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2238703 ']' 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2238703 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2238703 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2238703' 00:07:01.079 killing process with pid 2238703 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2238703 00:07:01.079 23:11:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2238703 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2238939 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2238939 ']' 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2238939 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2238939 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2238939' 00:07:06.354 killing process with pid 2238939 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2238939 00:07:06.354 23:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2238939 00:07:08.889 00:07:08.889 real 0m11.792s 00:07:08.889 user 0m11.966s 00:07:08.889 sys 0m1.185s 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.889 ************************************ 00:07:08.889 END TEST locking_app_on_unlocked_coremask 00:07:08.889 ************************************ 00:07:08.889 23:11:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.889 23:11:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:08.889 23:11:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.889 23:11:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.889 23:11:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.889 ************************************ 00:07:08.889 START TEST locking_app_on_locked_coremask 00:07:08.889 ************************************ 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2240755 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2240755 /var/tmp/spdk.sock 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2240755 ']' 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.889 23:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.889 [2024-07-10 23:11:17.652075] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:08.890 [2024-07-10 23:11:17.652176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240755 ] 00:07:08.890 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.890 [2024-07-10 23:11:17.755454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.149 [2024-07-10 23:11:17.962342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.085 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.085 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.085 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2240933 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2240933 /var/tmp/spdk2.sock 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2240933 /var/tmp/spdk2.sock 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2240933 /var/tmp/spdk2.sock 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2240933 ']' 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.086 23:11:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.086 [2024-07-10 23:11:18.920897] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
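Here the NOT wrapper inverts the exit status: with pid 2240755 holding the core 0 lock, the second locking target traced above is expected to die, which the claim_cpu_cores error just below confirms:

  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # -> "Cannot create lock on core 0,
                                           #     probably process 2240755 has claimed it."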
00:07:10.086 [2024-07-10 23:11:18.920994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240933 ] 00:07:10.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.086 [2024-07-10 23:11:19.062064] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2240755 has claimed it. 00:07:10.086 [2024-07-10 23:11:19.062122] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2240933) - No such process 00:07:10.654 ERROR: process (pid: 2240933) is no longer running 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2240755 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2240755 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.654 lslocks: write error 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2240755 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2240755 ']' 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2240755 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2240755 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2240755' 00:07:10.654 killing process with pid 2240755 00:07:10.654 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2240755 00:07:10.913 23:11:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2240755 00:07:13.486 00:07:13.486 real 0m4.631s 00:07:13.486 user 0m4.711s 00:07:13.486 sys 0m0.676s 00:07:13.486 23:11:22 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.486 23:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.486 ************************************ 00:07:13.486 END TEST locking_app_on_locked_coremask 00:07:13.486 ************************************ 00:07:13.486 23:11:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.486 23:11:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.486 23:11:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.486 23:11:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.486 23:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.486 ************************************ 00:07:13.486 START TEST locking_overlapped_coremask 00:07:13.486 ************************************ 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2241542 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2241542 /var/tmp/spdk.sock 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2241542 ']' 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.486 23:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.486 [2024-07-10 23:11:22.337139] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:13.486 [2024-07-10 23:11:22.337239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241542 ] 00:07:13.486 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.486 [2024-07-10 23:11:22.440412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.745 [2024-07-10 23:11:22.650782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.745 [2024-07-10 23:11:22.650849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.745 [2024-07-10 23:11:22.650856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2241774 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2241774 /var/tmp/spdk2.sock 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2241774 /var/tmp/spdk2.sock 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2241774 /var/tmp/spdk2.sock 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2241774 ']' 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.679 23:11:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.679 [2024-07-10 23:11:23.676009] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
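The expected failure below follows from the two core masks alone:

  # -m 0x7  = 0b00111 -> cores 0,1,2   (pid 2241542, locks held)
  # -m 0x1c = 0b11100 -> cores 2,3,4   (pid 2241774, wrapped in NOT)
  # Both masks contain core 2, so the second target aborts while trying to claim it.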
00:07:14.679 [2024-07-10 23:11:23.676101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241774 ] 00:07:14.679 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.937 [2024-07-10 23:11:23.815065] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2241542 has claimed it. 00:07:14.937 [2024-07-10 23:11:23.815122] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2241774) - No such process 00:07:15.505 ERROR: process (pid: 2241774) is no longer running 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2241542 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2241542 ']' 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2241542 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2241542 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2241542' 00:07:15.505 killing process with pid 2241542 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2241542 00:07:15.505 23:11:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2241542 00:07:18.039 00:07:18.039 real 0m4.602s 00:07:18.039 user 0m12.212s 00:07:18.039 sys 0m0.581s 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 ************************************ 00:07:18.039 END TEST locking_overlapped_coremask 00:07:18.039 ************************************ 00:07:18.039 23:11:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:18.039 23:11:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.039 23:11:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.039 23:11:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.039 23:11:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 ************************************ 00:07:18.039 START TEST locking_overlapped_coremask_via_rpc 00:07:18.039 ************************************ 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2242274 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2242274 /var/tmp/spdk.sock 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2242274 ']' 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.039 23:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 [2024-07-10 23:11:27.004233] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:18.039 [2024-07-10 23:11:27.004319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242274 ] 00:07:18.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.298 [2024-07-10 23:11:27.112888] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.298 [2024-07-10 23:11:27.112938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.298 [2024-07-10 23:11:27.325691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.298 [2024-07-10 23:11:27.325760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.298 [2024-07-10 23:11:27.325765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2242511 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2242511 /var/tmp/spdk2.sock 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2242511 ']' 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.236 23:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.494 [2024-07-10 23:11:28.329072] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:19.494 [2024-07-10 23:11:28.329169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242511 ] 00:07:19.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.494 [2024-07-10 23:11:28.474559] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
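The two coremasks are chosen to overlap on purpose: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is shared, and the two targets can only come up side by side because --disable-cpumask-locks keeps them from writing lock files at startup. A quick way to decode such a mask (a hedged sketch in plain bash, not part of the suite):

    mask=0x1c                       # the second target's -m argument
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && printf '%s ' "$core"
    done
    echo                            # prints: 2 3 4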
00:07:19.494 [2024-07-10 23:11:28.474616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.063 [2024-07-10 23:11:28.929963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.063 [2024-07-10 23:11:28.930048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.063 [2024-07-10 23:11:28.930069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.969 [2024-07-10 23:11:30.829279] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2242274 has claimed it. 
00:07:21.969 request: 00:07:21.969 { 00:07:21.969 "method": "framework_enable_cpumask_locks", 00:07:21.969 "req_id": 1 00:07:21.969 } 00:07:21.969 Got JSON-RPC error response 00:07:21.969 response: 00:07:21.969 { 00:07:21.969 "code": -32603, 00:07:21.969 "message": "Failed to claim CPU core: 2" 00:07:21.969 } 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2242274 /var/tmp/spdk.sock 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2242274 ']' 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.969 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.970 23:11:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2242511 /var/tmp/spdk2.sock 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2242511 ']' 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
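This JSON-RPC exchange is the heart of the test: framework_enable_cpumask_locks on the first target's default socket claims cores 0-2 for pid 2242274, so the same method invoked against /var/tmp/spdk2.sock can no longer lock the shared core 2 and returns code -32603. The exchange, approximated with SPDK's stock rpc.py client (a sketch; the suite's rpc_cmd wrapper resolves to this client, but the invocations below are reconstructed rather than copied from the log):

    scripts/rpc.py framework_enable_cpumask_locks
    # first target (default /var/tmp/spdk.sock) now holds cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # fails: {"code": -32603, "message": "Failed to claim CPU core: 2"}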
00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.970 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.229 00:07:22.229 real 0m4.280s 00:07:22.229 user 0m1.019s 00:07:22.229 sys 0m0.192s 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.229 23:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.229 ************************************ 00:07:22.229 END TEST locking_overlapped_coremask_via_rpc 00:07:22.229 ************************************ 00:07:22.229 23:11:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:22.229 23:11:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.229 23:11:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2242274 ]] 00:07:22.229 23:11:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2242274 00:07:22.229 23:11:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2242274 ']' 00:07:22.229 23:11:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2242274 00:07:22.229 23:11:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242274 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242274' 00:07:22.230 killing process with pid 2242274 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2242274 00:07:22.230 23:11:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2242274 00:07:25.516 23:11:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2242511 ]] 00:07:25.516 23:11:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2242511 00:07:25.516 23:11:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2242511 ']' 00:07:25.516 23:11:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2242511 00:07:25.516 23:11:33 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242511 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242511' 00:07:25.517 killing process with pid 2242511 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2242511 00:07:25.517 23:11:33 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2242511 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2242274 ]] 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2242274 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2242274 ']' 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2242274 00:07:28.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2242274) - No such process 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2242274 is not found' 00:07:28.054 Process with pid 2242274 is not found 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2242511 ]] 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2242511 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2242511 ']' 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2242511 00:07:28.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2242511) - No such process 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2242511 is not found' 00:07:28.054 Process with pid 2242511 is not found 00:07:28.054 23:11:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:28.054 00:07:28.054 real 0m51.082s 00:07:28.054 user 1m26.043s 00:07:28.054 sys 0m6.143s 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.054 23:11:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.054 ************************************ 00:07:28.054 END TEST cpu_locks 00:07:28.054 ************************************ 00:07:28.054 23:11:36 event -- common/autotest_common.sh@1142 -- # return 0 00:07:28.054 00:07:28.054 real 1m20.824s 00:07:28.054 user 2m20.766s 00:07:28.054 sys 0m9.748s 00:07:28.054 23:11:36 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.054 23:11:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.054 ************************************ 00:07:28.054 END TEST event 00:07:28.054 ************************************ 00:07:28.054 23:11:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.054 23:11:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:28.054 23:11:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.054 23:11:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.054 
23:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:28.054 ************************************ 00:07:28.054 START TEST thread 00:07:28.054 ************************************ 00:07:28.054 23:11:36 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:28.054 * Looking for test storage... 00:07:28.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:28.054 23:11:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:28.054 23:11:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:28.054 23:11:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.054 23:11:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.054 ************************************ 00:07:28.054 START TEST thread_poller_perf 00:07:28.054 ************************************ 00:07:28.054 23:11:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:28.054 [2024-07-10 23:11:36.756893] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:28.054 [2024-07-10 23:11:36.756976] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244075 ] 00:07:28.054 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.054 [2024-07-10 23:11:36.859050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.054 [2024-07-10 23:11:37.064844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.054 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:29.433 ====================================== 00:07:29.433 busy:2306676694 (cyc) 00:07:29.433 total_run_count: 399000 00:07:29.433 tsc_hz: 2300000000 (cyc) 00:07:29.433 ====================================== 00:07:29.433 poller_cost: 5781 (cyc), 2513 (nsec) 00:07:29.433 00:07:29.433 real 0m1.758s 00:07:29.433 user 0m1.622s 00:07:29.433 sys 0m0.129s 00:07:29.433 23:11:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.433 23:11:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.433 ************************************ 00:07:29.433 END TEST thread_poller_perf 00:07:29.433 ************************************ 00:07:29.693 23:11:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:29.693 23:11:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.693 23:11:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.693 23:11:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.693 23:11:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.693 ************************************ 00:07:29.693 START TEST thread_poller_perf 00:07:29.693 ************************************ 00:07:29.693 23:11:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.693 [2024-07-10 23:11:38.574543] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:29.693 [2024-07-10 23:11:38.574621] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244458 ] 00:07:29.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.693 [2024-07-10 23:11:38.673328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.952 [2024-07-10 23:11:38.883839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.952 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:31.332 ====================================== 00:07:31.332 busy:2302845000 (cyc) 00:07:31.332 total_run_count: 5213000 00:07:31.332 tsc_hz: 2300000000 (cyc) 00:07:31.332 ====================================== 00:07:31.332 poller_cost: 441 (cyc), 191 (nsec) 00:07:31.332 00:07:31.332 real 0m1.765s 00:07:31.332 user 0m1.631s 00:07:31.332 sys 0m0.127s 00:07:31.332 23:11:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.332 23:11:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.332 ************************************ 00:07:31.332 END TEST thread_poller_perf 00:07:31.332 ************************************ 00:07:31.332 23:11:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:31.332 23:11:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:31.332 00:07:31.332 real 0m3.728s 00:07:31.332 user 0m3.331s 00:07:31.332 sys 0m0.397s 00:07:31.332 23:11:40 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.332 23:11:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.332 ************************************ 00:07:31.332 END TEST thread 00:07:31.332 ************************************ 00:07:31.332 23:11:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:31.332 23:11:40 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:31.332 23:11:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.332 23:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.332 23:11:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.332 ************************************ 00:07:31.332 START TEST accel 00:07:31.332 ************************************ 00:07:31.332 23:11:40 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:31.591 * Looking for test storage... 00:07:31.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:31.591 23:11:40 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:31.591 23:11:40 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:31.591 23:11:40 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:31.591 23:11:40 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2244760 00:07:31.591 23:11:40 accel -- accel/accel.sh@63 -- # waitforlisten 2244760 00:07:31.591 23:11:40 accel -- common/autotest_common.sh@829 -- # '[' -z 2244760 ']' 00:07:31.591 23:11:40 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.591 23:11:40 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:31.591 23:11:40 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.591 23:11:40 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:31.591 23:11:40 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
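Both poller_perf banners above reduce to two divisions: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate expressed in cycles per nanosecond (2300000000 Hz here, i.e. 2.3 cyc/nsec). Judging by the printed values, the cycle count is truncated to an integer before the conversion. Reproducing both banners (a sketch with awk; the constants are copied from the output above):

    awk 'BEGIN {
        c = int(2306676694 / 399000)                  # 1 us period run
        printf "%d cyc, %d nsec\n", c, int(c / 2.3)   # -> 5781 cyc, 2513 nsec
        c = int(2302845000 / 5213000)                 # 0 us period run
        printf "%d cyc, %d nsec\n", c, int(c / 2.3)   # -> 441 cyc, 191 nsec
    }'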
00:07:31.591 23:11:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.591 23:11:40 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.591 23:11:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.591 23:11:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.591 23:11:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.591 23:11:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.591 23:11:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.591 23:11:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:31.591 23:11:40 accel -- accel/accel.sh@41 -- # jq -r . 00:07:31.591 [2024-07-10 23:11:40.562819] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:31.591 [2024-07-10 23:11:40.562914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244760 ] 00:07:31.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.851 [2024-07-10 23:11:40.665879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.851 [2024-07-10 23:11:40.877573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@862 -- # return 0 00:07:32.790 23:11:41 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:32.790 23:11:41 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:32.790 23:11:41 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:32.790 23:11:41 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:32.790 23:11:41 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:32.790 23:11:41 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.790 23:11:41 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.790 23:11:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.790 23:11:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.790 23:11:41 accel -- accel/accel.sh@75 -- # killprocess 2244760 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@948 -- # '[' -z 2244760 ']' 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@952 -- # kill -0 2244760 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@953 -- # uname 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.790 23:11:41 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2244760 00:07:33.049 23:11:41 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.049 23:11:41 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.049 23:11:41 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2244760' 00:07:33.049 killing process with pid 2244760 00:07:33.049 23:11:41 accel -- common/autotest_common.sh@967 -- # kill 2244760 00:07:33.049 23:11:41 accel -- common/autotest_common.sh@972 -- # wait 2244760 00:07:35.630 23:11:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:35.630 23:11:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.630 23:11:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:35.630 23:11:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:35.630 23:11:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.630 23:11:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.630 23:11:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.630 23:11:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.630 ************************************ 00:07:35.630 START TEST accel_missing_filename 00:07:35.630 ************************************ 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.630 23:11:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:35.630 23:11:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:35.630 [2024-07-10 23:11:44.548784] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:35.630 [2024-07-10 23:11:44.548866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245489 ] 00:07:35.630 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.630 [2024-07-10 23:11:44.649834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.888 [2024-07-10 23:11:44.850848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.147 [2024-07-10 23:11:45.089870] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.714 [2024-07-10 23:11:45.631015] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:36.973 A filename is required. 
00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.973 00:07:36.973 real 0m1.533s 00:07:36.973 user 0m1.375s 00:07:36.973 sys 0m0.192s 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.973 23:11:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:36.973 ************************************ 00:07:36.973 END TEST accel_missing_filename 00:07:36.973 ************************************ 00:07:37.232 23:11:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.232 23:11:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.232 23:11:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:37.232 23:11:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.232 23:11:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.232 ************************************ 00:07:37.232 START TEST accel_compress_verify 00:07:37.232 ************************************ 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.232 23:11:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.232 23:11:46 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:37.232 23:11:46 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:37.232 [2024-07-10 23:11:46.133986] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:37.232 [2024-07-10 23:11:46.134079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245738 ] 00:07:37.232 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.232 [2024-07-10 23:11:46.233311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.491 [2024-07-10 23:11:46.451054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.749 [2024-07-10 23:11:46.697517] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.316 [2024-07-10 23:11:47.240175] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:38.575 00:07:38.575 Compression does not support the verify option, aborting. 00:07:38.835 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:38.835 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.835 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:38.835 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:38.836 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:38.836 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.836 00:07:38.836 real 0m1.564s 00:07:38.836 user 0m1.404s 00:07:38.836 sys 0m0.191s 00:07:38.836 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.836 23:11:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:38.836 ************************************ 00:07:38.836 END TEST accel_compress_verify 00:07:38.836 ************************************ 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.836 23:11:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.836 ************************************ 00:07:38.836 START TEST accel_wrong_workload 00:07:38.836 ************************************ 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:38.836 23:11:47 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:38.836 23:11:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:38.836 Unsupported workload type: foobar 00:07:38.836 [2024-07-10 23:11:47.748250] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:38.836 accel_perf options: 00:07:38.836 [-h help message] 00:07:38.836 [-q queue depth per core] 00:07:38.836 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:38.836 [-T number of threads per core 00:07:38.836 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:38.836 [-t time in seconds] 00:07:38.836 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:38.836 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:38.836 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:38.836 [-l for compress/decompress workloads, name of uncompressed input file 00:07:38.836 [-S for crc32c workload, use this seed value (default 0) 00:07:38.836 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:38.836 [-f for fill workload, use this BYTE value (default 255) 00:07:38.836 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:38.836 [-y verify result if this switch is on] 00:07:38.836 [-a tasks to allocate per core (default: same value as -q)] 00:07:38.836 Can be used to spread operations across a wider range of memory. 
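accel_missing_filename, accel_compress_verify, accel_wrong_workload and accel_negative_buffers are negative tests: run_test wraps accel_perf in the NOT helper, so the test passes exactly when accel_perf fails, and autotest_common.sh additionally folds exit statuses above 128 down by 128 (hence es=234 -> 106 and es=161 -> 33 in the runs above) before collapsing everything to a single pass/fail bit. A simplified sketch of the helper's effect (the real NOT in autotest_common.sh also does xtrace bookkeeping; accel_perf here stands for the wrapped perf binary):

    NOT() {
        "$@"
        local es=$?
        (( es > 128 )) && (( es -= 128 ))   # normalize exit codes above 128
        (( es != 0 ))                       # succeed only if the command failed
    }
    NOT accel_perf -t 1 -w compress         # passes: compress without -l must fail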
00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.836 00:07:38.836 real 0m0.047s 00:07:38.836 user 0m0.066s 00:07:38.836 sys 0m0.021s 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.836 23:11:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:38.836 ************************************ 00:07:38.836 END TEST accel_wrong_workload 00:07:38.836 ************************************ 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.836 23:11:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.836 23:11:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.836 ************************************ 00:07:38.836 START TEST accel_negative_buffers 00:07:38.836 ************************************ 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:38.836 23:11:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:38.836 -x option must be non-negative. 
00:07:38.836 [2024-07-10 23:11:47.867882] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:38.836 accel_perf options: 00:07:38.836 [-h help message] 00:07:38.836 [-q queue depth per core] 00:07:38.836 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:38.836 [-T number of threads per core 00:07:38.836 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:38.836 [-t time in seconds] 00:07:38.836 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:38.836 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:38.836 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:38.836 [-l for compress/decompress workloads, name of uncompressed input file 00:07:38.836 [-S for crc32c workload, use this seed value (default 0) 00:07:38.836 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:38.836 [-f for fill workload, use this BYTE value (default 255) 00:07:38.836 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:38.836 [-y verify result if this switch is on] 00:07:38.836 [-a tasks to allocate per core (default: same value as -q)] 00:07:38.836 Can be used to spread operations across a wider range of memory. 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.836 00:07:38.836 real 0m0.070s 00:07:38.836 user 0m0.078s 00:07:38.836 sys 0m0.033s 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.836 23:11:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:38.836 ************************************ 00:07:38.836 END TEST accel_negative_buffers 00:07:38.836 ************************************ 00:07:39.096 23:11:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.096 23:11:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:39.096 23:11:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:39.096 23:11:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.096 23:11:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.096 ************************************ 00:07:39.096 START TEST accel_crc32c 00:07:39.096 ************************************ 00:07:39.096 23:11:47 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:39.096 23:11:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:39.096 [2024-07-10 23:11:47.989643] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:39.096 [2024-07-10 23:11:47.989716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246251 ] 00:07:39.096 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.096 [2024-07-10 23:11:48.088217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.355 [2024-07-10 23:11:48.290648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.615 23:11:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:41.521 23:11:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.521 00:07:41.521 real 0m2.542s 00:07:41.521 user 0m2.382s 00:07:41.521 sys 0m0.173s 00:07:41.521 23:11:50 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.521 23:11:50 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:41.521 ************************************ 00:07:41.521 END TEST accel_crc32c 00:07:41.521 ************************************ 00:07:41.521 23:11:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.521 23:11:50 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:41.521 23:11:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:41.521 23:11:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.521 23:11:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.521 ************************************ 00:07:41.521 START TEST accel_crc32c_C2 00:07:41.521 ************************************ 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:41.521 23:11:50 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.521 23:11:50 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:41.781 [2024-07-10 23:11:50.604130] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:41.781 [2024-07-10 23:11:50.604231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246680 ] 00:07:41.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.781 [2024-07-10 23:11:50.705587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.041 [2024-07-10 23:11:50.912571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.300 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:42.301 23:11:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.236 00:07:44.236 real 0m2.554s 00:07:44.236 user 0m2.384s 00:07:44.236 sys 0m0.184s 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.236 23:11:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:44.236 ************************************ 00:07:44.236 END TEST accel_crc32c_C2 00:07:44.236 ************************************ 00:07:44.236 23:11:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.236 23:11:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:44.236 23:11:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:44.236 23:11:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.236 23:11:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.236 ************************************ 00:07:44.236 START TEST accel_copy 00:07:44.236 ************************************ 00:07:44.236 23:11:53 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
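
The long case "$var" in / IFS=: / read -r var val runs above are xtrace output of accel.sh replaying accel_perf's settings through a field:value stream; the same pattern now repeats for accel_copy below. A hedged reconstruction of that idiom with invented field names (opc and module are guesses; only the quoted shell fragments appear verbatim in the trace):

    # Reconstruction of the traced parsing idiom, not a copy of accel.sh itself.
    printf 'opc:copy\nmodule:software\n' |
    while IFS=: read -r var val; do          # the 'IFS=:' / 'read -r var val' pairs above
        case "$var" in                       # the 'case "$var" in' lines above
            opc)    echo "accel_opc=$val" ;;     # trace shows accel_opc=copy, crc32c, ...
            module) echo "accel_module=$val" ;;  # trace shows accel_module=software
        esac
    done
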
00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:44.236 23:11:53 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:44.236 [2024-07-10 23:11:53.221190] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:44.236 [2024-07-10 23:11:53.221266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247077 ] 00:07:44.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.496 [2024-07-10 23:11:53.321577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.496 [2024-07-10 23:11:53.533914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.753 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.753 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.753 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.754 23:11:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 
23:11:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:47.290 23:11:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.290 00:07:47.290 real 0m2.573s 00:07:47.290 user 0m2.414s 00:07:47.290 sys 0m0.172s 00:07:47.290 23:11:55 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.291 23:11:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.291 ************************************ 00:07:47.291 END TEST accel_copy 00:07:47.291 ************************************ 00:07:47.291 23:11:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.291 23:11:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.291 23:11:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:47.291 23:11:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.291 23:11:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.291 ************************************ 00:07:47.291 START TEST accel_fill 00:07:47.291 ************************************ 00:07:47.291 23:11:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:47.291 23:11:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:47.291 [2024-07-10 23:11:55.856955] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:47.291 [2024-07-10 23:11:55.857039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247547 ] 00:07:47.291 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.291 [2024-07-10 23:11:55.959900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.291 [2024-07-10 23:11:56.176994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
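
Note the different knobs on this run: where the copy test above was invoked with just -t 1 -w copy -y, the accel_fill invocation adds -f 128 -q 64 -a 64, and the trace reads back val=0x80 (decimal 128) above and two val=64 entries below. A sketch of the sweep this section performs, under the same assumptions as the earlier sketch (SPDK_ROOT as set there, config fd omitted, flag semantics inferred):

    # Hedged sweep over the workloads logged in this section; reuses SPDK_ROOT
    # from the sketch above.
    run_one() { "$SPDK_ROOT/build/examples/accel_perf" -t 1 "$@" -y; }
    run_one -w copy
    run_one -w fill -f 128 -q 64 -a 64   # 128 == the 0x80 read back above
    run_one -w copy_crc32c               # traced next, below
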
00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.551 23:11:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.459 23:11:58 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:49.459 23:11:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.459 00:07:49.459 real 0m2.562s 00:07:49.459 user 0m2.405s 00:07:49.459 sys 0m0.171s 00:07:49.459 23:11:58 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.459 23:11:58 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:49.459 ************************************ 00:07:49.459 END TEST accel_fill 00:07:49.459 ************************************ 00:07:49.459 23:11:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.459 23:11:58 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:49.459 23:11:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.459 23:11:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.459 23:11:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.459 ************************************ 00:07:49.459 START TEST accel_copy_crc32c 00:07:49.459 ************************************ 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:49.459 23:11:58 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:49.459 [2024-07-10 23:11:58.464376] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:49.459 [2024-07-10 23:11:58.464456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247952 ] 00:07:49.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.719 [2024-07-10 23:11:58.565799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.978 [2024-07-10 23:11:58.790073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.978 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.979 
23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.979 23:11:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.886 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.145 00:07:52.145 real 0m2.531s 00:07:52.145 user 0m2.365s 00:07:52.145 sys 0m0.180s 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.145 23:12:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:52.145 ************************************ 00:07:52.145 END TEST accel_copy_crc32c 00:07:52.145 ************************************ 00:07:52.145 23:12:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.145 23:12:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.145 23:12:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:52.145 23:12:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.145 23:12:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.145 ************************************ 00:07:52.145 START TEST accel_copy_crc32c_C2 00:07:52.145 ************************************ 00:07:52.145 23:12:01 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.145 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:52.145 [2024-07-10 23:12:01.069488] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:52.145 [2024-07-10 23:12:01.069569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248487 ] 00:07:52.145 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.145 [2024-07-10 23:12:01.171468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.403 [2024-07-10 23:12:01.384449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
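
This run is the chained variant: the invocation above adds -C 2 to the same copy_crc32c workload, and the read-backs just below include both '4096 bytes' and '8192 bytes'. Reading -C as a chain count of 2 is an inference from the test name (accel_copy_crc32c_C2), not something the log states. The delta versus the plain run is just the one flag:

    # Chained variant as traced above; -C 2 is the only flag added relative to
    # the plain copy_crc32c run (SPDK_ROOT as in the earlier sketch).
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2
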
00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.662 23:12:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.567 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.567 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.567 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.567 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.567 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.567 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
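
Every test in this log closes the same way, visible in the entries just below: a post-run val=... dump, then bash-time real/user/sys lines, then the starred END TEST banner. A hedged one-liner for summarizing a console log of this shape; the filename is a placeholder:

    # Placeholder filename; pairs each test's elapsed time with its END banner.
    grep -E 'real [0-9]+m[0-9.]+s|END TEST' console.log
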
00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.568 00:07:54.568 real 0m2.554s 00:07:54.568 user 0m2.387s 00:07:54.568 sys 0m0.179s 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.568 23:12:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:54.568 ************************************ 00:07:54.568 END TEST accel_copy_crc32c_C2 00:07:54.568 ************************************ 00:07:54.568 23:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.568 23:12:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:54.568 23:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:54.568 23:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.568 23:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.827 ************************************ 00:07:54.827 START TEST accel_dualcast 00:07:54.827 ************************************ 00:07:54.827 23:12:03 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:54.827 23:12:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:54.827 [2024-07-10 23:12:03.686945] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
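
The bracketed pair of startup lines here repeats for every run: a 'Starting SPDK ...' notice followed by the DPDK EAL parameters (--no-shconf -c 0x1 --huge-unlink ..., as in the line that follows). The -c 0x1 value is a hex core mask, which is why each run also reports 'Total cores available: 1' and 'Reactor started on core 0'. A small sketch decoding such a mask:

    # Decode a DPDK-style hex core mask; 0x1 selects only core 0.
    mask=0x1
    printf 'cores:'
    for i in $(seq 0 31); do
        (( (mask >> i) & 1 )) && printf ' %d' "$i"
    done
    echo   # -> cores: 0
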
00:07:54.827 [2024-07-10 23:12:03.687027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249027 ] 00:07:54.827 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.827 [2024-07-10 23:12:03.787823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.086 [2024-07-10 23:12:03.997426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.345 23:12:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:57.268 23:12:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.268 00:07:57.268 real 0m2.569s 00:07:57.268 user 0m2.391s 00:07:57.268 sys 0m0.188s 00:07:57.268 23:12:06 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.268 23:12:06 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:57.268 ************************************ 00:07:57.268 END TEST accel_dualcast 00:07:57.268 ************************************ 00:07:57.268 23:12:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.268 23:12:06 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:57.268 23:12:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:57.268 23:12:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.268 23:12:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.268 ************************************ 00:07:57.268 START TEST accel_compare 00:07:57.268 ************************************ 00:07:57.268 23:12:06 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:57.268 23:12:06 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:57.268 [2024-07-10 23:12:06.309294] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
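The real/user/sys triple and the starred START/END banners around each case come from the run_test wrapper in common/autotest_common.sh, which also toggles tracing (the xtrace_disable calls above). A rough sketch of the wrapper's visible behavior; the real helper does more bookkeeping, so treat this as an approximation only:

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # produces the real/user/sys lines seen in this log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }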
00:07:57.268 [2024-07-10 23:12:06.309390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249508 ] 00:07:57.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.526 [2024-07-10 23:12:06.409951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.785 [2024-07-10 23:12:06.622340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.785 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.044 23:12:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.949 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.950 
23:12:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:59.950 23:12:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.950 00:07:59.950 real 0m2.580s 00:07:59.950 user 0m2.403s 00:07:59.950 sys 0m0.189s 00:07:59.950 23:12:08 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.950 23:12:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:59.950 ************************************ 00:07:59.950 END TEST accel_compare 00:07:59.950 ************************************ 00:07:59.950 23:12:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.950 23:12:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:59.950 23:12:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:59.950 23:12:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.950 23:12:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.950 ************************************ 00:07:59.950 START TEST accel_xor 00:07:59.950 ************************************ 00:07:59.950 23:12:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:59.950 23:12:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:59.950 [2024-07-10 23:12:08.953606] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
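The three accel/accel.sh@27 conditionals that close each case are the pass criteria, shown by xtrace after variable expansion (the \s\o\f\t\w\a\r\e escaping is how xtrace renders the pattern side of ==). Reconstructed in source form, with variable names inferred from the earlier accel_opc= and accel_module= assignments rather than copied from the script:

    [[ -n "$accel_module" ]]             # a module was reported back
    [[ -n "$accel_opc" ]]                # the requested opcode was seen
    [[ "$accel_module" == "software" ]]  # and it ran on the software path,
                                         # as expected with no HW accel configured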
00:07:59.950 [2024-07-10 23:12:08.953683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250229 ] 00:07:59.950 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.209 [2024-07-10 23:12:09.054799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.209 [2024-07-10 23:12:09.269015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.468 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.469 23:12:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.001 00:08:03.001 real 0m2.576s 00:08:03.001 user 0m2.401s 00:08:03.001 sys 0m0.185s 00:08:03.001 23:12:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.001 23:12:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 ************************************ 00:08:03.001 END TEST accel_xor 00:08:03.001 ************************************ 00:08:03.001 23:12:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.001 23:12:11 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:03.001 23:12:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:03.001 23:12:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.001 23:12:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 ************************************ 00:08:03.001 START TEST accel_xor 00:08:03.001 ************************************ 00:08:03.001 23:12:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:03.001 23:12:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:03.001 [2024-07-10 23:12:11.575700] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
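The second accel_xor pass reuses the test name but appends -x 3, raising the XOR source count from the default two (echoed as val=2 in the first pass) to three (val=3 here). Equivalent standalone invocation from the repo root, mirroring the logged flags:

    spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
    #   -x 3  XOR three source buffers into one destination
    #         (the first pass used the default of 2)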
00:08:03.001 [2024-07-10 23:12:11.575792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250853 ] 00:08:03.001 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.001 [2024-07-10 23:12:11.676091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.001 [2024-07-10 23:12:11.882009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.260 23:12:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.261 23:12:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:05.166 23:12:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.166 00:08:05.166 real 0m2.561s 00:08:05.166 user 0m2.387s 00:08:05.166 sys 0m0.185s 00:08:05.166 23:12:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.166 23:12:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:05.166 ************************************ 00:08:05.166 END TEST accel_xor 00:08:05.166 ************************************ 00:08:05.166 23:12:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.166 23:12:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:05.166 23:12:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:05.166 23:12:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.166 23:12:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.166 ************************************ 00:08:05.166 START TEST accel_dif_verify 00:08:05.166 ************************************ 00:08:05.166 23:12:14 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:05.166 23:12:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:05.166 [2024-07-10 23:12:14.175636] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
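accel_dif_verify is the first of the DIF cases, and its parameter echo differs accordingly: two '4096 bytes' buffers plus '512 bytes' and '8 bytes' values, consistent with the T10 DIF layout in which every 512-byte block carries an 8-byte Data Integrity Field to be checked (that mapping of the echoed values is an inference from the trace, not from the script). The logged invocation, repeated as a standalone sketch:

    spdk/build/examples/accel_perf -t 1 -w dif_verify
    # note: no -y flag here, since dif_verify is itself the verification step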
00:08:05.166 [2024-07-10 23:12:14.175726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251332 ] 00:08:05.166 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.426 [2024-07-10 23:12:14.275571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.426 [2024-07-10 23:12:14.481083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.685 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.686 23:12:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:08.222 23:12:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.222 00:08:08.222 real 0m2.557s 00:08:08.222 user 0m2.396s 00:08:08.222 sys 0m0.175s 00:08:08.222 23:12:16 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.222 23:12:16 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:08.222 ************************************ 00:08:08.222 END TEST accel_dif_verify 00:08:08.222 ************************************ 00:08:08.222 23:12:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.222 23:12:16 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:08.222 23:12:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:08.222 23:12:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.222 23:12:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.222 ************************************ 00:08:08.222 START TEST accel_dif_generate 00:08:08.222 ************************************ 00:08:08.222 23:12:16 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.222 
23:12:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:08.222 23:12:16 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:08.222 [2024-07-10 23:12:16.811990] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:08.222 [2024-07-10 23:12:16.812068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251807 ] 00:08:08.222 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.222 [2024-07-10 23:12:16.910970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.222 [2024-07-10 23:12:17.120886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.481 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:08.482 23:12:17 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.482 23:12:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.388 23:12:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:10.388 23:12:19 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.388 00:08:10.388 real 0m2.564s 00:08:10.388 user 0m2.411s 00:08:10.388 sys 0m0.167s 00:08:10.388 23:12:19 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.388 23:12:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 ************************************ 00:08:10.388 END TEST accel_dif_generate 00:08:10.388 ************************************ 00:08:10.388 23:12:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.388 23:12:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:10.388 23:12:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:10.388 23:12:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.388 23:12:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 ************************************ 00:08:10.388 START TEST accel_dif_generate_copy 00:08:10.388 ************************************ 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:10.388 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:10.388 [2024-07-10 23:12:19.440653] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
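The case above reduces to the single accel_perf invocation recorded verbatim in the trace. A minimal standalone reproduction, assuming SPDK is built under the workspace path shown in the log, and substituting a placeholder JSON config for the one the harness builds from accel_json_cfg (empty in this run) and feeds through jq on fd 62:

  # Sketch: re-run the dif_generate case by hand. The config passed on fd 62
  # is a placeholder assumption; accel_json_cfg carries no modules in this
  # run, so the software path is exercised either way.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate \
      62< <(echo '{"subsystems": []}')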
00:08:10.388 [2024-07-10 23:12:19.440737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252287 ] 00:08:10.648 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.648 [2024-07-10 23:12:19.540837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.907 [2024-07-10 23:12:19.746969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.167 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.168 23:12:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.168 23:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:13.075 23:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.075 00:08:13.075 real 0m2.571s 00:08:13.075 user 0m2.394s 00:08:13.075 sys 0m0.191s 00:08:13.076 23:12:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.076 23:12:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:13.076 ************************************ 00:08:13.076 END TEST accel_dif_generate_copy 00:08:13.076 ************************************ 00:08:13.076 23:12:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.076 23:12:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:13.076 23:12:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.076 23:12:21 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:13.076 23:12:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.076 23:12:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.076 ************************************ 00:08:13.076 START TEST accel_comp 00:08:13.076 ************************************ 00:08:13.076 23:12:22 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.076 23:12:22 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:13.076 23:12:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:13.076 [2024-07-10 23:12:22.070869] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:13.076 [2024-07-10 23:12:22.070952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252722 ] 00:08:13.076 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.335 [2024-07-10 23:12:22.172505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.335 [2024-07-10 23:12:22.383438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.595 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.596 23:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:16.131 23:12:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.131 00:08:16.131 real 0m2.555s 00:08:16.131 user 0m2.377s 00:08:16.131 sys 0m0.193s 00:08:16.131 23:12:24 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.131 23:12:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:16.131 ************************************ 00:08:16.131 END TEST accel_comp 00:08:16.131 ************************************ 00:08:16.131 23:12:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:16.131 23:12:24 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:16.131 23:12:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:16.131 23:12:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.131 23:12:24 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.131 ************************************ 00:08:16.131 START TEST accel_decomp 00:08:16.131 ************************************ 00:08:16.131 23:12:24 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:16.131 23:12:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:16.131 [2024-07-10 23:12:24.673654] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
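The compress/decompress cases that follow differ only in their workload flags. Per the command recorded above, -l points accel_perf at the bib test corpus shipped in the SPDK tree, and -y, on our reading of the flag, asks the tool to verify the output. A standalone sketch under the same placeholder-config assumption as before:

  # Sketch: the decompress case exactly as recorded in the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y \
      62< <(echo '{"subsystems": []}')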
00:08:16.131 [2024-07-10 23:12:24.673731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253119 ] 00:08:16.131 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.131 [2024-07-10 23:12:24.772642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.131 [2024-07-10 23:12:24.980356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:16.389 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.390 23:12:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.353 23:12:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.353 00:08:18.353 real 0m2.555s 00:08:18.353 user 0m2.387s 00:08:18.353 sys 0m0.183s 00:08:18.353 23:12:27 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.353 23:12:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:18.353 ************************************ 00:08:18.353 END TEST accel_decomp 00:08:18.353 ************************************ 00:08:18.353 23:12:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.353 23:12:27 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.353 23:12:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:18.353 23:12:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.353 23:12:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.353 ************************************ 00:08:18.353 START TEST accel_decomp_full 00:08:18.353 ************************************ 00:08:18.353 23:12:27 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.353 23:12:27 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:18.353 23:12:27 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:18.353 [2024-07-10 23:12:27.284103] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:18.353 [2024-07-10 23:12:27.284202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253533 ] 00:08:18.353 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.353 [2024-07-10 23:12:27.384948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.629 [2024-07-10 23:12:27.608900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.887 23:12:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.789 23:12:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.789 00:08:20.789 real 0m2.578s 00:08:20.789 user 0m2.409s 00:08:20.789 sys 0m0.182s 00:08:20.789 23:12:29 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.789 23:12:29 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:20.789 ************************************ 00:08:20.789 END TEST accel_decomp_full 00:08:20.789 ************************************ 00:08:21.048 23:12:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.048 23:12:29 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.048 23:12:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:21.048 23:12:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.048 23:12:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.048 ************************************ 00:08:21.048 START TEST accel_decomp_mcore 00:08:21.048 ************************************ 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:21.048 23:12:29 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:21.048 [2024-07-10 23:12:29.933431] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:21.048 [2024-07-10 23:12:29.933511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254004 ] 00:08:21.048 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.048 [2024-07-10 23:12:30.039050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.308 [2024-07-10 23:12:30.281699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.308 [2024-07-10 23:12:30.281780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.308 [2024-07-10 23:12:30.281880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.308 [2024-07-10 23:12:30.281889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.567 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:21.568 23:12:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.103 00:08:24.103 real 0m2.679s 00:08:24.103 user 0m8.070s 00:08:24.103 sys 0m0.202s 00:08:24.103 23:12:32 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.103 23:12:32 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:24.103 ************************************ 00:08:24.103 END TEST accel_decomp_mcore 00:08:24.103 ************************************ 00:08:24.103 23:12:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:24.103 23:12:32 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.103 23:12:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:24.103 23:12:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.103 23:12:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.103 ************************************ 00:08:24.103 START TEST accel_decomp_full_mcore 00:08:24.103 ************************************ 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:24.103 23:12:32 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:24.103 [2024-07-10 23:12:32.675189] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
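For reference, the accel_decomp_full_mcore run starting here reduces to the single accel_perf invocation recorded verbatim in the trace above. A minimal way to rerun it by hand, assuming hugepages are already configured (the harness sets this up earlier in the job):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path from the trace
    # -t 1: run for one second; -w decompress: workload under test;
    # -l: compressed input (the 'bib' test file); -y: verify decompressed output;
    # -m 0xf: core mask, matching "Total cores available: 4" and the four
    #         reactor threads reported below.
    # -o 0: judging by the '111250 bytes' value in the trace, the "full" variants
    #       use this to process the whole input rather than 4096-byte chunks.
    # The harness additionally passes -c /dev/fd/62 with a generated accel config;
    # that config stays empty here (no hardware modules requested), so it is omitted.
    $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK_DIR/test/accel/bib -y -o 0 -m 0xf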
00:08:24.103 [2024-07-10 23:12:32.675267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254484 ] 00:08:24.103 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.103 [2024-07-10 23:12:32.776542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.103 [2024-07-10 23:12:32.996646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.103 [2024-07-10 23:12:32.996720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.103 [2024-07-10 23:12:32.996777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.103 [2024-07-10 23:12:32.996800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.362 23:12:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.266 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.525 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.525 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.525 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.525 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.526 00:08:26.526 real 0m2.712s 00:08:26.526 user 0m8.267s 00:08:26.526 sys 0m0.205s 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.526 23:12:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:26.526 ************************************ 00:08:26.526 END TEST accel_decomp_full_mcore 00:08:26.526 ************************************ 00:08:26.526 23:12:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:26.526 23:12:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.526 23:12:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:26.526 23:12:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.526 23:12:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.526 ************************************ 00:08:26.526 START TEST accel_decomp_mthread 00:08:26.526 ************************************ 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:26.526 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:26.526 [2024-07-10 23:12:35.449991] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
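The accel_decomp_mthread run starting here swaps the multi-core mask for a thread count. Distilled from the command in the trace, under the same assumptions as the sketch above:

    # -T 2 requests two worker threads (the val=2 entry in the config dump below);
    # the harness keeps a single core (-c 0x1 in the EAL parameters), so the
    # decompression runs on two threads of one reactor instead of four reactors.
    $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK_DIR/test/accel/bib -y -T 2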
00:08:26.526 [2024-07-10 23:12:35.450089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254969 ] 00:08:26.526 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.526 [2024-07-10 23:12:35.552152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.785 [2024-07-10 23:12:35.762006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.045 23:12:36 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:27.045 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.046 23:12:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.948 00:08:28.948 real 0m2.573s 00:08:28.948 user 0m2.392s 00:08:28.948 sys 0m0.196s 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.948 23:12:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:28.948 ************************************ 00:08:28.948 END TEST accel_decomp_mthread 00:08:28.948 ************************************ 00:08:28.948 23:12:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.948 23:12:38 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.948 23:12:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:28.948 23:12:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.948 23:12:38 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.207 ************************************ 00:08:29.207 START TEST accel_decomp_full_mthread 00:08:29.207 ************************************ 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:29.207 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:29.207 [2024-07-10 23:12:38.065680] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:29.207 [2024-07-10 23:12:38.065757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255442 ] 00:08:29.207 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.207 [2024-07-10 23:12:38.161208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.466 [2024-07-10 23:12:38.375979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.725 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.726 23:12:38 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.726 23:12:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.631 00:08:31.631 real 0m2.596s 00:08:31.631 user 0m2.425s 00:08:31.631 sys 0m0.185s 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.631 23:12:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:31.631 ************************************ 00:08:31.631 END 
TEST accel_decomp_full_mthread 00:08:31.631 ************************************ 00:08:31.631 23:12:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.631 23:12:40 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:31.631 23:12:40 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.631 23:12:40 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:31.631 23:12:40 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.631 23:12:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.631 23:12:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.631 23:12:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.631 23:12:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.631 23:12:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.631 23:12:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.631 23:12:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.632 23:12:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:31.632 23:12:40 accel -- accel/accel.sh@41 -- # jq -r . 00:08:31.632 ************************************ 00:08:31.632 START TEST accel_dif_functional_tests 00:08:31.632 ************************************ 00:08:31.632 23:12:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.891 [2024-07-10 23:12:40.763032] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:31.891 [2024-07-10 23:12:40.763120] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255925 ] 00:08:31.891 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.891 [2024-07-10 23:12:40.863772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.150 [2024-07-10 23:12:41.075987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.150 [2024-07-10 23:12:41.076053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.150 [2024-07-10 23:12:41.076059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.410 00:08:32.410 00:08:32.410 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.410 http://cunit.sourceforge.net/ 00:08:32.410 00:08:32.410 00:08:32.410 Suite: accel_dif 00:08:32.410 Test: verify: DIF generated, GUARD check ...passed 00:08:32.410 Test: verify: DIF generated, APPTAG check ...passed 00:08:32.410 Test: verify: DIF generated, REFTAG check ...passed 00:08:32.410 Test: verify: DIF not generated, GUARD check ...[2024-07-10 23:12:41.442182] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:32.410 passed 00:08:32.410 Test: verify: DIF not generated, APPTAG check ...[2024-07-10 23:12:41.442256] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:32.410 passed 00:08:32.410 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 23:12:41.442297] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:32.410 passed 00:08:32.410 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:32.410 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-10 
23:12:41.442373] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:32.410 passed 00:08:32.410 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:32.410 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:32.410 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:32.410 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-10 23:12:41.442534] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:32.410 passed 00:08:32.410 Test: verify copy: DIF generated, GUARD check ...passed 00:08:32.410 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:32.410 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:32.410 Test: verify copy: DIF not generated, GUARD check ...[2024-07-10 23:12:41.442711] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:32.410 passed 00:08:32.410 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-10 23:12:41.442762] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:32.410 passed 00:08:32.410 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-10 23:12:41.442806] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:32.410 passed 00:08:32.411 Test: generate copy: DIF generated, GUARD check ...passed 00:08:32.411 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:32.411 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:32.411 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:32.411 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:32.411 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:32.411 Test: generate copy: iovecs-len validate ...[2024-07-10 23:12:41.443105] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:32.411 passed
00:08:32.411 Test: generate copy: buffer alignment validate ...passed
00:08:32.411
00:08:32.411 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:32.411               suites      1      1    n/a      0        0
00:08:32.411                tests     26     26     26      0        0
00:08:32.411              asserts    115    115    115      0      n/a
00:08:32.411
00:08:32.411 Elapsed time = 0.005 seconds
00:08:33.790
00:08:33.790 real 0m2.041s
00:08:33.790 user 0m4.296s
00:08:33.790 sys 0m0.211s
00:08:33.790 23:12:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:33.790 23:12:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:08:33.790 ************************************
00:08:33.790 END TEST accel_dif_functional_tests
00:08:33.790 ************************************
00:08:33.790 23:12:42 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:33.790
00:08:33.790 real 1m2.373s
00:08:33.790 user 1m11.303s
00:08:33.790 sys 0m5.854s
00:08:33.790 23:12:42 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:33.790 23:12:42 accel -- common/autotest_common.sh@10 -- # set +x
00:08:33.790 ************************************
00:08:33.790 END TEST accel
00:08:33.790 ************************************
00:08:33.790 23:12:42 -- common/autotest_common.sh@1142 -- # return 0
00:08:33.790 23:12:42 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:33.790 23:12:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:33.790 23:12:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:33.790 23:12:42 -- common/autotest_common.sh@10 -- # set +x
00:08:33.790 ************************************
00:08:33.790 START TEST accel_rpc
00:08:33.790 ************************************
00:08:33.790 23:12:42 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:34.050 * Looking for test storage...
00:08:34.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:08:34.050 23:12:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:34.050 23:12:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2256437
00:08:34.050 23:12:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2256437
00:08:34.050 23:12:42 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2256437 ']'
00:08:34.050 23:12:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:08:34.050 23:12:42 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:34.050 23:12:42 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:34.050 23:12:42 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:34.050 23:12:42 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:34.050 23:12:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:34.050 [2024-07-10 23:12:42.993271] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
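The START/END banners and the real/user/sys triplets that punctuate this log come from the harness's run_test wrapper. A simplified sketch of its shape (the real implementation lives in test/common/autotest_common.sh and also records per-test timing and propagates failures):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the real/user/sys lines after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }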
00:08:34.050 [2024-07-10 23:12:42.993369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256437 ] 00:08:34.050 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.050 [2024-07-10 23:12:43.096045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.310 [2024-07-10 23:12:43.305549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.878 23:12:43 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.878 23:12:43 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:34.878 23:12:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:34.878 23:12:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:34.878 23:12:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:34.878 23:12:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:34.878 23:12:43 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:34.878 23:12:43 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.878 23:12:43 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.878 23:12:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.878 ************************************ 00:08:34.878 START TEST accel_assign_opcode 00:08:34.878 ************************************ 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:34.878 [2024-07-10 23:12:43.795338] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:34.878 [2024-07-10 23:12:43.803337] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.878 23:12:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:35.815 
23:12:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.815 software 00:08:35.815 00:08:35.815 real 0m0.901s 00:08:35.815 user 0m0.044s 00:08:35.815 sys 0m0.009s 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.815 23:12:44 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.815 ************************************ 00:08:35.815 END TEST accel_assign_opcode 00:08:35.815 ************************************ 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:35.815 23:12:44 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2256437 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2256437 ']' 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2256437 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2256437 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2256437' 00:08:35.815 killing process with pid 2256437 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@967 -- # kill 2256437 00:08:35.815 23:12:44 accel_rpc -- common/autotest_common.sh@972 -- # wait 2256437 00:08:38.352 00:08:38.352 real 0m4.335s 00:08:38.352 user 0m4.280s 00:08:38.352 sys 0m0.526s 00:08:38.352 23:12:47 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.352 23:12:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.352 ************************************ 00:08:38.352 END TEST accel_rpc 00:08:38.352 ************************************ 00:08:38.352 23:12:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:38.352 23:12:47 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.352 23:12:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:38.352 23:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.352 23:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:38.352 ************************************ 00:08:38.352 START TEST app_cmdline 00:08:38.352 ************************************ 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.352 * Looking for test storage... 
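Condensed from the accel_rpc trace above: with --wait-for-rpc the target accepts RPCs before the framework initializes, an opcode assignment can be overwritten freely until framework_start_init runs, and the final query confirms the copy opcode landed on the software module. The rpc.py calls below are the ones rpc_cmd issued, with SPDK_DIR as in the earlier sketch:

    $SPDK_DIR/build/bin/spdk_tgt --wait-for-rpc &                     # RPC server up, framework not initialized
    $SPDK_DIR/scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init, module not validated yet
    $SPDK_DIR/scripts/rpc.py accel_assign_opc -o copy -m software     # overrides the bogus assignment
    $SPDK_DIR/scripts/rpc.py framework_start_init
    $SPDK_DIR/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", as in the log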
00:08:38.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:38.352 23:12:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.352 23:12:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2257203 00:08:38.352 23:12:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2257203 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2257203 ']' 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.352 23:12:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.352 23:12:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.352 [2024-07-10 23:12:47.378780] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:38.352 [2024-07-10 23:12:47.378876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257203 ] 00:08:38.612 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.612 [2024-07-10 23:12:47.482712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.871 [2024-07-10 23:12:47.695286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.839 { 00:08:39.839 "version": "SPDK v24.09-pre git sha1 9937c0160", 00:08:39.839 "fields": { 00:08:39.839 "major": 24, 00:08:39.839 "minor": 9, 00:08:39.839 "patch": 0, 00:08:39.839 "suffix": "-pre", 00:08:39.839 "commit": "9937c0160" 00:08:39.839 } 00:08:39.839 } 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.839 23:12:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.839 23:12:48 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.097 request: 00:08:40.097 { 00:08:40.097 "method": "env_dpdk_get_mem_stats", 00:08:40.097 "req_id": 1 00:08:40.097 } 00:08:40.097 Got JSON-RPC error response 00:08:40.097 response: 00:08:40.097 { 00:08:40.097 "code": -32601, 00:08:40.097 "message": "Method not found" 00:08:40.097 } 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:40.097 23:12:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2257203 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2257203 ']' 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2257203 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2257203 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2257203' 00:08:40.097 killing process with pid 2257203 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@967 -- # kill 2257203 00:08:40.097 23:12:49 app_cmdline -- common/autotest_common.sh@972 -- # wait 2257203 00:08:42.634 00:08:42.634 real 0m4.283s 00:08:42.634 user 0m4.505s 00:08:42.634 sys 0m0.522s 00:08:42.634 23:12:51 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
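
The "Method not found" exchange above is the point of this test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any RPC outside that allowlist is rejected with JSON-RPC error -32601. A condensed standalone reproduction, assuming an SPDK checkout at ./spdk (paths shortened from the workspace-relative ones in the trace):

    # Start the target permitting only two RPC methods
    ./spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    ./spdk/scripts/rpc.py spdk_get_version        # allowed: prints the version object
    ./spdk/scripts/rpc.py rpc_get_methods         # allowed: lists exactly these two methods
    ./spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected with -32601 "Method not found"
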
00:08:42.634 23:12:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.634 ************************************ 00:08:42.634 END TEST app_cmdline 00:08:42.634 ************************************ 00:08:42.634 23:12:51 -- common/autotest_common.sh@1142 -- # return 0 00:08:42.634 23:12:51 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.634 23:12:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.634 23:12:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.634 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.634 ************************************ 00:08:42.634 START TEST version 00:08:42.634 ************************************ 00:08:42.634 23:12:51 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.634 * Looking for test storage... 00:08:42.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:42.634 23:12:51 version -- app/version.sh@17 -- # get_header_version major 00:08:42.634 23:12:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # cut -f2 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.634 23:12:51 version -- app/version.sh@17 -- # major=24 00:08:42.634 23:12:51 version -- app/version.sh@18 -- # get_header_version minor 00:08:42.634 23:12:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # cut -f2 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.634 23:12:51 version -- app/version.sh@18 -- # minor=9 00:08:42.634 23:12:51 version -- app/version.sh@19 -- # get_header_version patch 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # cut -f2 00:08:42.634 23:12:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.634 23:12:51 version -- app/version.sh@19 -- # patch=0 00:08:42.634 23:12:51 version -- app/version.sh@20 -- # get_header_version suffix 00:08:42.634 23:12:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # cut -f2 00:08:42.634 23:12:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.634 23:12:51 version -- app/version.sh@20 -- # suffix=-pre 00:08:42.634 23:12:51 version -- app/version.sh@22 -- # version=24.9 00:08:42.634 23:12:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:42.634 23:12:51 version -- app/version.sh@28 -- # version=24.9rc0 00:08:42.895 23:12:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:42.895 23:12:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:42.895 23:12:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:42.895 23:12:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:42.895 00:08:42.895 real 0m0.154s 00:08:42.895 user 0m0.087s 00:08:42.895 sys 0m0.098s 00:08:42.895 23:12:51 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.895 23:12:51 version -- common/autotest_common.sh@10 -- # set +x 00:08:42.895 ************************************ 00:08:42.895 END TEST version 00:08:42.895 ************************************ 00:08:42.895 23:12:51 -- common/autotest_common.sh@1142 -- # return 0 00:08:42.895 23:12:51 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@198 -- # uname -s 00:08:42.895 23:12:51 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:42.895 23:12:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:42.895 23:12:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:42.895 23:12:51 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:42.895 23:12:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.895 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.895 23:12:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:42.895 23:12:51 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:42.895 23:12:51 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:42.895 23:12:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.895 23:12:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.895 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.895 ************************************ 00:08:42.895 START TEST nvmf_tcp 00:08:42.895 ************************************ 00:08:42.895 23:12:51 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:42.895 * Looking for test storage... 00:08:42.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.895 23:12:51 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.895 23:12:51 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.895 23:12:51 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.895 23:12:51 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.895 23:12:51 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.895 23:12:51 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.895 23:12:51 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:42.895 23:12:51 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:42.895 23:12:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.895 23:12:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:42.895 23:12:51 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:42.895 23:12:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.896 23:12:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.896 23:12:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.155 ************************************ 00:08:43.155 START TEST nvmf_example 00:08:43.155 ************************************ 00:08:43.155 23:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:43.155 * Looking for test storage... 
00:08:43.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.155 23:12:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.427 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.427 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.427 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.428 Found net devices under 
0000:86:00.0: cvl_0_0 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.428 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:08:48.428 00:08:48.428 --- 10.0.0.2 ping statistics --- 00:08:48.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.428 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:48.428 00:08:48.428 --- 10.0.0.1 ping statistics --- 00:08:48.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.428 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2261056 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2261056 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2261056 ']' 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
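
For context, the nvmf_tcp_init sequence traced above is what makes this single-host phy run possible: the two ports of the E810 NIC (cvl_0_0, cvl_0_1) are split so the target side lives in its own network namespace while the initiator stays in the root namespace, then reachability is verified with ping in both directions. A condensed sketch of the same commands, with interface names and addresses exactly as discovered in the trace:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
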
00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.428 23:12:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.428 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:49.364 23:12:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:49.623 EAL: No free 2048 kB hugepages reported on node 1 
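
Stripped of the xtrace noise, the example target above is configured over JSON-RPC and then driven by spdk_nvme_perf; the output that follows reports the achieved IOPS and latency. The equivalent sequence, with rpc.py and spdk_nvme_perf shortened from their workspace paths (perf flags as used in the trace: -q queue depth, -o I/O size in bytes, -w workload pattern, -M read percentage for randrw, -t run time in seconds):

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as in the trace
    rpc.py bdev_malloc_create 64 512                   # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
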
00:08:59.611 Initializing NVMe Controllers 00:08:59.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:59.611 Initialization complete. Launching workers. 00:08:59.611 ======================================================== 00:08:59.611 Latency(us) 00:08:59.611 Device Information : IOPS MiB/s Average min max 00:08:59.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16011.82 62.55 3997.61 851.51 16445.08 00:08:59.611 ======================================================== 00:08:59.611 Total : 16011.82 62.55 3997.61 851.51 16445.08 00:08:59.611 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.870 rmmod nvme_tcp 00:08:59.870 rmmod nvme_fabrics 00:08:59.870 rmmod nvme_keyring 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2261056 ']' 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2261056 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2261056 ']' 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2261056 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2261056 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2261056' 00:08:59.870 killing process with pid 2261056 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2261056 00:08:59.870 23:13:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2261056 00:09:01.247 nvmf threads initialize successfully 00:09:01.247 bdev subsystem init successfully 00:09:01.247 created a nvmf target service 00:09:01.247 create targets's poll groups done 00:09:01.247 all subsystems of target started 00:09:01.247 nvmf target is running 00:09:01.247 all subsystems of target stopped 00:09:01.247 destroy targets's poll groups done 00:09:01.247 destroyed the nvmf target service 00:09:01.247 bdev subsystem finish successfully 00:09:01.247 nvmf threads destroy successfully 00:09:01.247 23:13:10 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.247 23:13:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.150 23:13:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.150 23:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:03.150 23:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.150 23:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.150 00:09:03.150 real 0m20.234s 00:09:03.150 user 0m49.240s 00:09:03.150 sys 0m5.569s 00:09:03.412 23:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.413 23:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.413 ************************************ 00:09:03.413 END TEST nvmf_example 00:09:03.413 ************************************ 00:09:03.413 23:13:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:03.413 23:13:12 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:03.413 23:13:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.413 23:13:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.413 23:13:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.413 ************************************ 00:09:03.413 START TEST nvmf_filesystem 00:09:03.413 ************************************ 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:03.413 * Looking for test storage... 
00:09:03.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:03.413 23:13:12 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:03.413 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:03.414 #define SPDK_CONFIG_H 00:09:03.414 #define SPDK_CONFIG_APPS 1 00:09:03.414 #define SPDK_CONFIG_ARCH native 00:09:03.414 #define SPDK_CONFIG_ASAN 1 00:09:03.414 #undef SPDK_CONFIG_AVAHI 00:09:03.414 #undef SPDK_CONFIG_CET 00:09:03.414 #define SPDK_CONFIG_COVERAGE 1 00:09:03.414 #define SPDK_CONFIG_CROSS_PREFIX 00:09:03.414 #undef SPDK_CONFIG_CRYPTO 00:09:03.414 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:03.414 #undef SPDK_CONFIG_CUSTOMOCF 00:09:03.414 #undef SPDK_CONFIG_DAOS 00:09:03.414 #define SPDK_CONFIG_DAOS_DIR 00:09:03.414 #define SPDK_CONFIG_DEBUG 1 00:09:03.414 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:03.414 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.414 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:03.414 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:03.414 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:03.414 #undef SPDK_CONFIG_DPDK_UADK 00:09:03.414 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.414 #define SPDK_CONFIG_EXAMPLES 1 00:09:03.414 #undef SPDK_CONFIG_FC 00:09:03.414 #define SPDK_CONFIG_FC_PATH 00:09:03.414 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:03.414 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:03.414 #undef SPDK_CONFIG_FUSE 00:09:03.414 #undef SPDK_CONFIG_FUZZER 00:09:03.414 #define SPDK_CONFIG_FUZZER_LIB 00:09:03.414 #undef SPDK_CONFIG_GOLANG 00:09:03.414 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:03.414 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:03.414 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:03.414 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:03.414 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:03.414 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:03.414 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:03.414 #define SPDK_CONFIG_IDXD 1 00:09:03.414 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:03.414 #undef SPDK_CONFIG_IPSEC_MB 00:09:03.414 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:03.414 #define SPDK_CONFIG_ISAL 1 00:09:03.414 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:03.414 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:03.414 #define SPDK_CONFIG_LIBDIR 00:09:03.414 #undef SPDK_CONFIG_LTO 00:09:03.414 #define SPDK_CONFIG_MAX_LCORES 128 00:09:03.414 #define SPDK_CONFIG_NVME_CUSE 1 00:09:03.414 #undef SPDK_CONFIG_OCF 00:09:03.414 #define SPDK_CONFIG_OCF_PATH 00:09:03.414 #define 
SPDK_CONFIG_OPENSSL_PATH 00:09:03.414 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:03.414 #define SPDK_CONFIG_PGO_DIR 00:09:03.414 #undef SPDK_CONFIG_PGO_USE 00:09:03.414 #define SPDK_CONFIG_PREFIX /usr/local 00:09:03.414 #undef SPDK_CONFIG_RAID5F 00:09:03.414 #undef SPDK_CONFIG_RBD 00:09:03.414 #define SPDK_CONFIG_RDMA 1 00:09:03.414 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:03.414 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:03.414 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:03.414 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:03.414 #define SPDK_CONFIG_SHARED 1 00:09:03.414 #undef SPDK_CONFIG_SMA 00:09:03.414 #define SPDK_CONFIG_TESTS 1 00:09:03.414 #undef SPDK_CONFIG_TSAN 00:09:03.414 #define SPDK_CONFIG_UBLK 1 00:09:03.414 #define SPDK_CONFIG_UBSAN 1 00:09:03.414 #undef SPDK_CONFIG_UNIT_TESTS 00:09:03.414 #undef SPDK_CONFIG_URING 00:09:03.414 #define SPDK_CONFIG_URING_PATH 00:09:03.414 #undef SPDK_CONFIG_URING_ZNS 00:09:03.414 #undef SPDK_CONFIG_USDT 00:09:03.414 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:03.414 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:03.414 #undef SPDK_CONFIG_VFIO_USER 00:09:03.414 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:03.414 #define SPDK_CONFIG_VHOST 1 00:09:03.414 #define SPDK_CONFIG_VIRTIO 1 00:09:03.414 #undef SPDK_CONFIG_VTUNE 00:09:03.414 #define SPDK_CONFIG_VTUNE_DIR 00:09:03.414 #define SPDK_CONFIG_WERROR 1 00:09:03.414 #define SPDK_CONFIG_WPDK_DIR 00:09:03.414 #undef SPDK_CONFIG_XNVME 00:09:03.414 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:03.414 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:03.415 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:03.415 23:13:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
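The block above wires up LeakSanitizer suppressions before any SPDK binary runs: a known fuse3 leak is whitelisted by writing a suppression file and pointing LSAN_OPTIONS at it. A minimal standalone sketch of that setup, using the same path and suppression entry that appear in the trace (the real common.sh also concatenates any pre-existing suppression files, elided here):

  # Rebuild the suppression file from scratch, then route LSAN to it.
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"   # whitelist the known fuse3 leak
  export LSAN_OPTIONS=suppressions=$asan_suppression_file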
00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2263697 ]] 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2263697 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pge0YX 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:03.416 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pge0YX/tests/target /tmp/spdk.pge0YX 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189426302976 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974324224 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6548021248 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97982451712 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987162112 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185489920 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986600960 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987162112 00:09:03.417 23:13:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=561152 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:03.417 * Looking for test storage... 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189426302976 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8762613760 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:03.417 23:13:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.417 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
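Immediately above, set_test_storage decides where the test scratch space will live: it runs df over each candidate directory, resolves the backing mount, and accepts the first candidate whose available space covers the requested 2214592512 bytes (the 2 GiB request plus the overhead added earlier). A rough equivalent of that check, assuming GNU df for the --output flag (the script itself parses plain df -T output into arrays instead):

  # Does the mount backing target_dir have room for the requested scratch space?
  requested_size=2214592512
  target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')    # mount point, as in the trace
  target_space=$(df --output=avail -B1 "$mount" | tail -1)          # available bytes on that mount
  (( target_space >= requested_size )) && echo "* Found test storage at $target_dir"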
00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.677 23:13:12 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.677 23:13:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.949 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.949 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.949 23:13:17 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.949 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.949 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:09:08.949 00:09:08.949 --- 10.0.0.2 ping statistics --- 00:09:08.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.949 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:09:08.949 00:09:08.949 --- 10.0.0.1 ping statistics --- 00:09:08.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.949 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.949 23:13:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.949 ************************************ 00:09:08.949 START TEST nvmf_filesystem_no_in_capsule 00:09:08.949 ************************************ 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2266711 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2266711 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2266711 ']' 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.950 23:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.950 [2024-07-10 23:13:17.754260] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:09:08.950 [2024-07-10 23:13:17.754361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.950 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.950 [2024-07-10 23:13:17.861934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.209 [2024-07-10 23:13:18.077606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.209 [2024-07-10 23:13:18.077651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.209 [2024-07-10 23:13:18.077663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.209 [2024-07-10 23:13:18.077672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.209 [2024-07-10 23:13:18.077682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
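The commands traced above are the suite's single-host NVMe/TCP wiring: nvmf_tcp_init moves one port of the e810 pair (cvl_0_0) into a private network namespace so target and initiator can talk over real hardware on the same machine, and the target is then launched inside that namespace. A condensed sketch rebuilt from the traced commands (names, addresses, and paths exactly as they appear in this log; the backgrounding and PID capture stand in for the harness's waitforlisten bookkeeping):

    # Rebuilt from the nvmf/common.sh trace above; a sketch, not a drop-in script.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target side lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                            # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                    # harness then waits on /var/tmp/spdk.sock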
00:09:09.209 [2024-07-10 23:13:18.077756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.209 [2024-07-10 23:13:18.077833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.209 [2024-07-10 23:13:18.077892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.209 [2024-07-10 23:13:18.077902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.468 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.468 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:09.468 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.468 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.726 [2024-07-10 23:13:18.577914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.726 23:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 Malloc1 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 [2024-07-10 23:13:19.217922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:10.296 { 00:09:10.296 "name": "Malloc1", 00:09:10.296 "aliases": [ 00:09:10.296 "2787f79c-3069-4aee-8a5b-60a165e7c05f" 00:09:10.296 ], 00:09:10.296 "product_name": "Malloc disk", 00:09:10.296 "block_size": 512, 00:09:10.296 "num_blocks": 1048576, 00:09:10.296 "uuid": "2787f79c-3069-4aee-8a5b-60a165e7c05f", 00:09:10.296 "assigned_rate_limits": { 00:09:10.296 "rw_ios_per_sec": 0, 00:09:10.296 "rw_mbytes_per_sec": 0, 00:09:10.296 "r_mbytes_per_sec": 0, 00:09:10.296 "w_mbytes_per_sec": 0 00:09:10.296 }, 00:09:10.296 "claimed": true, 00:09:10.296 "claim_type": "exclusive_write", 00:09:10.296 "zoned": false, 00:09:10.296 "supported_io_types": { 00:09:10.296 "read": true, 00:09:10.296 "write": true, 00:09:10.296 "unmap": true, 00:09:10.296 "flush": true, 00:09:10.296 "reset": true, 00:09:10.296 "nvme_admin": false, 00:09:10.296 "nvme_io": false, 00:09:10.296 "nvme_io_md": false, 00:09:10.296 "write_zeroes": true, 00:09:10.296 "zcopy": true, 00:09:10.296 "get_zone_info": false, 00:09:10.296 "zone_management": false, 00:09:10.296 "zone_append": false, 00:09:10.296 "compare": false, 00:09:10.296 "compare_and_write": false, 00:09:10.296 "abort": true, 00:09:10.296 "seek_hole": false, 00:09:10.296 "seek_data": false, 00:09:10.296 "copy": true, 00:09:10.296 "nvme_iov_md": false 00:09:10.296 }, 00:09:10.296 "memory_domains": [ 00:09:10.296 { 
00:09:10.296 "dma_device_id": "system", 00:09:10.296 "dma_device_type": 1 00:09:10.296 }, 00:09:10.296 { 00:09:10.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.296 "dma_device_type": 2 00:09:10.296 } 00:09:10.296 ], 00:09:10.296 "driver_specific": {} 00:09:10.296 } 00:09:10.296 ]' 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:10.296 23:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.712 23:13:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.712 23:13:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.712 23:13:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.712 23:13:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.712 23:13:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:13.617 23:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.994 ************************************ 00:09:14.994 START TEST filesystem_ext4 00:09:14.994 ************************************ 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:14.994 23:13:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:14.994 mke2fs 1.46.5 (30-Dec-2021) 00:09:14.994 Discarding device blocks: 0/522240 done 00:09:14.994 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:14.994 Filesystem UUID: ac3b587c-c587-488a-a524-656cc19a8d57 00:09:14.994 Superblock backups stored on blocks: 00:09:14.994 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:14.994 00:09:14.994 Allocating group tables: 0/64 done 00:09:14.994 Writing inode tables: 0/64 done 00:09:14.994 Creating journal (8192 blocks): done 00:09:14.994 Writing superblocks and filesystem accounting information: 0/64 done 00:09:14.994 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:14.994 23:13:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2266711 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:15.562 00:09:15.562 real 0m0.925s 00:09:15.562 user 0m0.035s 00:09:15.562 sys 0m0.054s 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.562 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:15.562 ************************************ 00:09:15.562 END TEST filesystem_ext4 00:09:15.562 ************************************ 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:15.821 23:13:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.821 ************************************ 00:09:15.821 START TEST filesystem_btrfs 00:09:15.821 ************************************ 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:15.821 btrfs-progs v6.6.2 00:09:15.821 See https://btrfs.readthedocs.io for more information. 00:09:15.821 00:09:15.821 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:15.821 NOTE: several default settings have changed in version 5.15, please make sure 00:09:15.821 this does not affect your deployments: 00:09:15.821 - DUP for metadata (-m dup) 00:09:15.821 - enabled no-holes (-O no-holes) 00:09:15.821 - enabled free-space-tree (-R free-space-tree) 00:09:15.821 00:09:15.821 Label: (null) 00:09:15.821 UUID: 91c3dc18-1f29-46b2-9f6a-3909c7bbf25a 00:09:15.821 Node size: 16384 00:09:15.821 Sector size: 4096 00:09:15.821 Filesystem size: 510.00MiB 00:09:15.821 Block group profiles: 00:09:15.821 Data: single 8.00MiB 00:09:15.821 Metadata: DUP 32.00MiB 00:09:15.821 System: DUP 8.00MiB 00:09:15.821 SSD detected: yes 00:09:15.821 Zoned device: no 00:09:15.821 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:15.821 Runtime features: free-space-tree 00:09:15.821 Checksum: crc32c 00:09:15.821 Number of devices: 1 00:09:15.821 Devices: 00:09:15.821 ID SIZE PATH 00:09:15.821 1 510.00MiB /dev/nvme0n1p1 00:09:15.821 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:15.821 23:13:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:16.080 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:16.080 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:16.080 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:16.080 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:16.080 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:16.080 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2266711 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:16.340 00:09:16.340 real 0m0.481s 00:09:16.340 user 0m0.033s 00:09:16.340 sys 0m0.111s 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 ************************************ 00:09:16.340 END TEST filesystem_btrfs 00:09:16.340 ************************************ 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 ************************************ 00:09:16.340 START TEST filesystem_xfs 00:09:16.340 ************************************ 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:16.340 23:13:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:16.340 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:16.340 = sectsz=512 attr=2, projid32bit=1 00:09:16.340 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:16.340 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:16.340 data = bsize=4096 blocks=130560, imaxpct=25 00:09:16.340 = sunit=0 swidth=0 blks 00:09:16.340 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:16.340 log =internal log bsize=4096 blocks=16384, version=2 00:09:16.340 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:16.340 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:17.278 Discarding blocks...Done. 
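Each filesystem_* TEST in this log drives the same cycle from target/filesystem.sh once mkfs finishes, as traced immediately below for xfs: mount the partition, write and remove a file, unmount, then confirm the target survived and the device is still visible. Condensed from the traced lines ($nvmfpid is the nvmf_tgt PID the suite recorded at startup):

    mount /dev/nvme0n1p1 /mnt/device          # partition sits on the exported Malloc bdev
    touch /mnt/device/aaa                     # a write through the full NVMe-oF TCP path
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the test partition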
00:09:17.278 23:13:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:17.278 23:13:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2266711 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.814 00:09:19.814 real 0m3.444s 00:09:19.814 user 0m0.031s 00:09:19.814 sys 0m0.064s 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.814 ************************************ 00:09:19.814 END TEST filesystem_xfs 00:09:19.814 ************************************ 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:19.814 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:20.073 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:20.073 23:13:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.333 23:13:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2266711 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2266711 ']' 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2266711 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2266711 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2266711' 00:09:20.333 killing process with pid 2266711 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2266711 00:09:20.333 23:13:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2266711 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:23.636 00:09:23.636 real 0m14.418s 00:09:23.636 user 0m54.350s 00:09:23.636 sys 0m1.323s 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.636 ************************************ 00:09:23.636 END TEST nvmf_filesystem_no_in_capsule 00:09:23.636 ************************************ 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.636 ************************************ 00:09:23.636 START TEST nvmf_filesystem_in_capsule 00:09:23.636 ************************************ 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2269238 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2269238 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2269238 ']' 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.636 23:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.636 [2024-07-10 23:13:32.243089] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:09:23.636 [2024-07-10 23:13:32.243180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.636 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.636 [2024-07-10 23:13:32.352015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.636 [2024-07-10 23:13:32.567834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.636 [2024-07-10 23:13:32.567880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
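This second suite, nvmf_filesystem_in_capsule, re-runs the same ext4/btrfs/xfs matrix; the functional difference is the in-capsule data size handed to nvmf_create_transport (-c 4096 below, versus -c 0 in the first suite), so writes of up to 4 KiB ride inside the NVMe/TCP command capsule instead of being fetched by the target afterwards. In SPDK's test harness rpc_cmd forwards to scripts/rpc.py against /var/tmp/spdk.sock, so the two transport setups amount to (a sketch, assuming the SPDK repo root as working directory):

    # First suite: no in-capsule data; payloads transferred after the command
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # This suite: up to 4096 bytes of data travel inside the command capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096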
00:09:23.636 [2024-07-10 23:13:32.567892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.636 [2024-07-10 23:13:32.567904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.636 [2024-07-10 23:13:32.567916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.636 [2024-07-10 23:13:32.567989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.636 [2024-07-10 23:13:32.568066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.636 [2024-07-10 23:13:32.568125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.636 [2024-07-10 23:13:32.568135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.205 [2024-07-10 23:13:33.061974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.205 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 Malloc1 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.772 23:13:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.772 [2024-07-10 23:13:33.780028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:24.772 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:24.773 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:24.773 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.773 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.773 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.773 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:24.773 { 00:09:24.773 "name": "Malloc1", 00:09:24.773 "aliases": [ 00:09:24.773 "72b29a14-cfd7-4a6d-b505-8057fb1559b2" 00:09:24.773 ], 00:09:24.773 "product_name": "Malloc disk", 00:09:24.773 "block_size": 512, 00:09:24.773 "num_blocks": 1048576, 00:09:24.773 "uuid": "72b29a14-cfd7-4a6d-b505-8057fb1559b2", 00:09:24.773 "assigned_rate_limits": { 00:09:24.773 "rw_ios_per_sec": 0, 00:09:24.773 "rw_mbytes_per_sec": 0, 00:09:24.773 "r_mbytes_per_sec": 0, 00:09:24.773 "w_mbytes_per_sec": 0 00:09:24.773 }, 00:09:24.773 "claimed": true, 00:09:24.773 "claim_type": "exclusive_write", 00:09:24.773 "zoned": false, 00:09:24.773 "supported_io_types": { 00:09:24.773 "read": true, 00:09:24.773 "write": true, 00:09:24.773 "unmap": true, 00:09:24.773 "flush": true, 00:09:24.773 "reset": true, 00:09:24.773 "nvme_admin": false, 00:09:24.773 "nvme_io": false, 00:09:24.773 "nvme_io_md": false, 00:09:24.773 "write_zeroes": true, 00:09:24.773 "zcopy": true, 00:09:24.773 "get_zone_info": false, 00:09:24.773 "zone_management": false, 00:09:24.773 
"zone_append": false, 00:09:24.773 "compare": false, 00:09:24.773 "compare_and_write": false, 00:09:24.773 "abort": true, 00:09:24.773 "seek_hole": false, 00:09:24.773 "seek_data": false, 00:09:24.773 "copy": true, 00:09:24.773 "nvme_iov_md": false 00:09:24.773 }, 00:09:24.773 "memory_domains": [ 00:09:24.773 { 00:09:24.773 "dma_device_id": "system", 00:09:24.773 "dma_device_type": 1 00:09:24.773 }, 00:09:24.773 { 00:09:24.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.773 "dma_device_type": 2 00:09:24.773 } 00:09:24.773 ], 00:09:24.773 "driver_specific": {} 00:09:24.773 } 00:09:24.773 ]' 00:09:24.773 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:25.031 23:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.966 23:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.966 23:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.966 23:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.966 23:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.966 23:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.504 23:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.504 23:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.504 23:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:09:28.504 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:09:29.072 23:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:30.008 ************************************
00:09:30.008 START TEST filesystem_in_capsule_ext4
00:09:30.008 ************************************
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:09:30.008 23:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:30.008 mke2fs 1.46.5 (30-Dec-2021)
00:09:30.008 Discarding device blocks: 0/522240 done
00:09:30.009 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:30.009 Filesystem UUID: 3e41b1d4-06c2-4025-96d2-41eb28679083
00:09:30.009 Superblock backups stored on blocks:
00:09:30.009 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:30.009
00:09:30.009 Allocating group tables: 0/64 done
00:09:30.268 Writing inode tables: 0/64 done
00:09:30.268 Creating journal (8192 blocks): done
00:09:31.351 Writing superblocks and filesystem accounting information: 0/6426/64 done
00:09:31.351
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2269238
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:31.351 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:31.610
00:09:31.610 real 0m1.452s
00:09:31.610 user 0m0.027s
00:09:31.610 sys 0m0.063s
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:09:31.610 ************************************
00:09:31.610 END TEST filesystem_in_capsule_ext4
00:09:31.610 ************************************
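Every filesystem_in_capsule_* case that follows repeats the pattern just traced: format the GPT partition, mount it, prove a small write/delete survives a sync, unmount, and confirm the target process and block devices are still healthy. Condensed into a sketch (the literal PID 2269238 above is the nvmf_tgt recorded at startup, written here as the assumed variable $nvmfpid):

  # one-time partitioning, then the per-filesystem smoke test
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1     # device and partition must still be visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1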
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:31.610 ************************************
00:09:31.610 START TEST filesystem_in_capsule_btrfs
00:09:31.610 ************************************
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:09:31.610 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:32.177 btrfs-progs v6.6.2
00:09:32.177 See https://btrfs.readthedocs.io for more information.
00:09:32.177
00:09:32.177 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:32.177 NOTE: several default settings have changed in version 5.15, please make sure
00:09:32.177 this does not affect your deployments:
00:09:32.177 - DUP for metadata (-m dup)
00:09:32.177 - enabled no-holes (-O no-holes)
00:09:32.177 - enabled free-space-tree (-R free-space-tree)
00:09:32.177
00:09:32.177 Label: (null)
00:09:32.177 UUID: 843000c2-2d26-49ed-a85b-d4209c9c7732
00:09:32.177 Node size: 16384
00:09:32.177 Sector size: 4096
00:09:32.177 Filesystem size: 510.00MiB
00:09:32.177 Block group profiles:
00:09:32.177 Data: single 8.00MiB
00:09:32.177 Metadata: DUP 32.00MiB
00:09:32.177 System: DUP 8.00MiB
00:09:32.177 SSD detected: yes
00:09:32.177 Zoned device: no
00:09:32.177 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:09:32.177 Runtime features: free-space-tree
00:09:32.177 Checksum: crc32c
00:09:32.177 Number of devices: 1
00:09:32.177 Devices:
00:09:32.177 ID SIZE PATH
00:09:32.177 1 510.00MiB /dev/nvme0n1p1
00:09:32.177
00:09:32.177 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0
00:09:32.177 23:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2269238
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:32.436
00:09:32.436 real 0m0.803s
00:09:32.436 user 0m0.024s
00:09:32.436 sys 0m0.130s
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:09:32.436 ************************************
00:09:32.436 END TEST filesystem_in_capsule_btrfs
00:09:32.436 ************************************
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:32.436 ************************************
00:09:32.436 START TEST filesystem_in_capsule_xfs
00:09:32.436 ************************************
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f
00:09:32.436 23:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:32.436 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:09:32.436 = sectsz=512 attr=2, projid32bit=1
00:09:32.436 = crc=1 finobt=1, sparse=1, rmapbt=0
00:09:32.437 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:09:32.437 data = bsize=4096 blocks=130560, imaxpct=25
00:09:32.437 = sunit=0 swidth=0 blks
00:09:32.437 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:09:32.437 log =internal log bsize=4096 blocks=16384, version=2
00:09:32.437 = sectsz=512 sunit=0 blks, lazy-count=1
00:09:32.437 realtime =none extsz=4096 blocks=0, rtextents=0
00:09:33.449 Discarding blocks...Done.
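The @929-@935 trace repeated in each case is make_filesystem's force-flag selection: mkfs.ext4 spells "force" as -F, while mkfs.btrfs and mkfs.xfs take -f. The core of the helper reduces to roughly this (a sketch; the real helper also keeps the retry counter seen above as local i=0):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # ext4 is the odd one out: uppercase -F instead of -f
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs.$fstype $force "$dev_name"
  }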
00:09:33.449 23:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0
00:09:33.449 23:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2269238
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:35.353
00:09:35.353 real 0m2.974s
00:09:35.353 user 0m0.020s
00:09:35.353 sys 0m0.075s
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:09:35.353 ************************************
00:09:35.353 END TEST filesystem_in_capsule_xfs
00:09:35.353 ************************************
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:09:35.353 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:09:35.612 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:09:35.612 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:35.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:35.871 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2269238
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2269238 ']'
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2269238
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2269238
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2269238'
00:09:35.872 killing process with pid 2269238
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2269238
00:09:35.872 23:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2269238
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:09:39.162
00:09:39.162 real 0m15.520s
00:09:39.162 user 0m58.649s
00:09:39.162 sys 0m1.431s
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:39.162 ************************************
00:09:39.162 END TEST nvmf_filesystem_in_capsule
00:09:39.162 ************************************
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:39.162 rmmod nvme_tcp
00:09:39.162 rmmod nvme_fabrics
00:09:39.162 rmmod nvme_keyring
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:39.162 23:13:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:41.070 23:13:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:41.070
00:09:41.070 real 0m37.553s
00:09:41.070 user 1m54.540s
00:09:41.070 sys 0m6.757s
00:09:41.070 23:13:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:41.070 23:13:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:41.070 ************************************
00:09:41.070 END TEST nvmf_filesystem
00:09:41.070 ************************************
00:09:41.070 23:13:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:41.070 23:13:49 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:09:41.070 23:13:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:41.070 23:13:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:41.070 23:13:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:41.070 ************************************
00:09:41.070 START TEST nvmf_target_discovery
00:09:41.070 ************************************
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:09:41.070 * Looking for test storage...
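The teardown that closed out the filesystem suite just above always runs in the same order, host side first, target side second. Condensed into a sketch (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and $nvmfpid stands in for the target PID recorded at startup):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drop the host controller
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt reactors
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # unload the host modules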
00:09:41.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:41.070 23:13:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:09:41.070 23:13:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=()
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:09:46.342 Found 0000:86:00.0 (0x8086 - 0x159b)
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:09:46.342 Found 0000:86:00.1 (0x8086 - 0x159b)
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:09:46.342 Found net devices under 0000:86:00.0: cvl_0_0
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:46.342 Found net devices under 0000:86:00.1: cvl_0_1
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:46.342 23:13:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:46.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:46.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:09:46.342
00:09:46.342 --- 10.0.0.2 ping statistics ---
00:09:46.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:46.342 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:09:46.342 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:46.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:46.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms
00:09:46.342
00:09:46.342 --- 10.0.0.1 ping statistics ---
00:09:46.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:46.343 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2275505
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2275505
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2275505 ']'
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:46.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:46.343 23:13:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:46.343 [2024-07-10 23:13:55.328362] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:09:46.343 [2024-07-10 23:13:55.328450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:46.343 EAL: No free 2048 kB hugepages reported on node 1
00:09:46.601 [2024-07-10 23:13:55.434888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:46.601 [2024-07-10 23:13:55.647242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:46.601 [2024-07-10 23:13:55.647285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:46.601 [2024-07-10 23:13:55.647297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:46.601 [2024-07-10 23:13:55.647306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:46.601 [2024-07-10 23:13:55.647335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:46.601 [2024-07-10 23:13:55.647427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:46.601 [2024-07-10 23:13:55.647546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:46.601 [2024-07-10 23:13:55.647638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.601 [2024-07-10 23:13:55.647649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 [2024-07-10 23:13:56.157255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
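The phy-mode network bring-up traced above splits the two e810 ports: cvl_0_0 is moved into a private namespace for the target while cvl_0_1 stays in the root namespace as the initiator, so host and target traffic really cross the wire. As a sketch (interface names and the 10.0.0.0/24 addressing are this run's values):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                # sanity-check the path
  # the target then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF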
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 Null1
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 [2024-07-10 23:13:56.205548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 Null2
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 Null3
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.193 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 Null4
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:09:47.452
00:09:47.452 Discovery Log Number of Records 6, Generation counter 6
00:09:47.452 =====Discovery Log Entry 0======
00:09:47.452 trtype: tcp
00:09:47.452 adrfam: ipv4
00:09:47.452 subtype: current discovery subsystem
00:09:47.452 treq: not required
00:09:47.452 portid: 0
00:09:47.452 trsvcid: 4420
00:09:47.452 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:47.452 traddr: 10.0.0.2
00:09:47.452 eflags: explicit discovery connections, duplicate discovery information
00:09:47.452 sectype: none
00:09:47.452 =====Discovery Log Entry 1======
00:09:47.452 trtype: tcp
00:09:47.452 adrfam: ipv4
00:09:47.452 subtype: nvme subsystem
00:09:47.452 treq: not required
00:09:47.452 portid: 0
00:09:47.452 trsvcid: 4420
00:09:47.452 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:47.452 traddr: 10.0.0.2
00:09:47.452 eflags: none
00:09:47.452 sectype: none
00:09:47.452 =====Discovery Log Entry 2======
00:09:47.452 trtype: tcp
00:09:47.452 adrfam: ipv4
00:09:47.452 subtype: nvme subsystem
00:09:47.452 treq: not required
00:09:47.452 portid: 0
00:09:47.452 trsvcid: 4420
00:09:47.452 subnqn: nqn.2016-06.io.spdk:cnode2
00:09:47.452 traddr: 10.0.0.2
00:09:47.452 eflags: none
00:09:47.452 sectype: none
00:09:47.452 =====Discovery Log Entry 3======
00:09:47.452 trtype: tcp
00:09:47.452 adrfam: ipv4
00:09:47.452 subtype: nvme subsystem
00:09:47.452 treq: not required
00:09:47.452 portid: 0
00:09:47.452 trsvcid: 4420
00:09:47.452 subnqn: nqn.2016-06.io.spdk:cnode3
00:09:47.452 traddr: 10.0.0.2
00:09:47.452 eflags: none
00:09:47.452 sectype: none
00:09:47.452 =====Discovery Log Entry 4======
00:09:47.452 trtype: tcp
00:09:47.452 adrfam: ipv4
00:09:47.452 subtype: nvme subsystem
00:09:47.452 treq: not required
00:09:47.452 portid: 0
00:09:47.452 trsvcid: 4420
00:09:47.452 subnqn: nqn.2016-06.io.spdk:cnode4
00:09:47.452 traddr: 10.0.0.2
00:09:47.452 eflags: none
00:09:47.452 sectype: none
00:09:47.452 =====Discovery Log Entry 5======
00:09:47.452 trtype: tcp
00:09:47.452 adrfam: ipv4
00:09:47.452 subtype: discovery subsystem referral
00:09:47.452 treq: not required
00:09:47.452 portid: 0
00:09:47.452 trsvcid: 4430
00:09:47.452 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:47.452 traddr: 10.0.0.2
00:09:47.452 eflags: none
00:09:47.452 sectype: none
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:09:47.452 Perform nvmf subsystem discovery via RPC
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 [
00:09:47.452 {
00:09:47.452 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:09:47.452 "subtype": "Discovery",
00:09:47.452 "listen_addresses": [
00:09:47.452 {
00:09:47.452 "trtype": "TCP",
00:09:47.452 "adrfam": "IPv4",
00:09:47.452 "traddr": "10.0.0.2",
00:09:47.452 "trsvcid": "4420"
00:09:47.452 }
00:09:47.452 ],
00:09:47.452 "allow_any_host": true,
00:09:47.452 "hosts": []
00:09:47.452 },
00:09:47.452 {
00:09:47.452 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:47.452 "subtype": "NVMe",
00:09:47.452 "listen_addresses": [
00:09:47.452 {
00:09:47.452 "trtype": "TCP",
00:09:47.452 "adrfam": "IPv4",
00:09:47.452 "traddr": "10.0.0.2",
00:09:47.452 "trsvcid": "4420"
00:09:47.452 }
00:09:47.452 ],
00:09:47.452 "allow_any_host": true,
00:09:47.452 "hosts": [],
00:09:47.452 "serial_number": "SPDK00000000000001",
00:09:47.452 "model_number": "SPDK bdev Controller",
00:09:47.452 "max_namespaces": 32,
00:09:47.452 "min_cntlid": 1,
00:09:47.452 "max_cntlid": 65519,
00:09:47.452 "namespaces": [
00:09:47.452 {
00:09:47.452 "nsid": 1,
00:09:47.452 "bdev_name": "Null1",
00:09:47.452 "name": "Null1",
00:09:47.452 "nguid": "81BFD14F69DD4AD4BC83484C26FB0A0E",
00:09:47.452 "uuid": "81bfd14f-69dd-4ad4-bc83-484c26fb0a0e"
00:09:47.452 }
00:09:47.452 ]
00:09:47.452 },
00:09:47.452 {
00:09:47.452 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:47.452 "subtype": "NVMe",
00:09:47.452 "listen_addresses": [
00:09:47.452 {
00:09:47.452 "trtype": "TCP",
00:09:47.452 "adrfam": "IPv4",
00:09:47.452 "traddr": "10.0.0.2",
00:09:47.452 "trsvcid": "4420"
00:09:47.452 }
00:09:47.452 ],
00:09:47.452 "allow_any_host": true,
00:09:47.452 "hosts": [],
00:09:47.452 "serial_number": "SPDK00000000000002",
00:09:47.452 "model_number": "SPDK bdev Controller",
00:09:47.452 "max_namespaces": 32,
00:09:47.452 "min_cntlid": 1,
00:09:47.452 "max_cntlid": 65519,
00:09:47.452 "namespaces": [
00:09:47.452 {
00:09:47.452 "nsid": 1,
00:09:47.452 "bdev_name": "Null2",
00:09:47.452 "name": "Null2",
00:09:47.452 "nguid": "96120A720EDE4FFABA6B3DACC65C523F",
00:09:47.452 "uuid": "96120a72-0ede-4ffa-ba6b-3dacc65c523f"
00:09:47.452 }
00:09:47.452 ]
00:09:47.452 },
00:09:47.452 {
00:09:47.452 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:09:47.452 "subtype": "NVMe",
00:09:47.452 "listen_addresses": [
00:09:47.452 {
00:09:47.452 "trtype": "TCP",
00:09:47.452 "adrfam": "IPv4",
00:09:47.452 "traddr": "10.0.0.2",
00:09:47.452 "trsvcid": "4420"
00:09:47.452 }
00:09:47.452 ],
00:09:47.452 "allow_any_host": true,
00:09:47.452 "hosts": [],
00:09:47.452 "serial_number": "SPDK00000000000003",
00:09:47.452 "model_number": "SPDK bdev Controller",
00:09:47.452 "max_namespaces": 32,
00:09:47.452 "min_cntlid": 1,
00:09:47.452 "max_cntlid": 65519,
00:09:47.452 "namespaces": [
00:09:47.452 {
00:09:47.452 "nsid": 1,
00:09:47.452 "bdev_name": "Null3",
00:09:47.452 "name": "Null3",
00:09:47.452 "nguid": "2E54383D122644609FD8CE7E4EDEC6F2",
00:09:47.452 "uuid": "2e54383d-1226-4460-9fd8-ce7e4edec6f2"
00:09:47.452 }
00:09:47.452 ]
00:09:47.452 },
00:09:47.452 {
00:09:47.452 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:09:47.452 "subtype": "NVMe",
00:09:47.452 "listen_addresses": [
00:09:47.452 {
00:09:47.452 "trtype": "TCP",
00:09:47.452 "adrfam": "IPv4",
00:09:47.452 "traddr": "10.0.0.2",
00:09:47.452 "trsvcid": "4420"
00:09:47.452 }
00:09:47.452 ],
00:09:47.452 "allow_any_host": true,
00:09:47.452 "hosts": [],
00:09:47.452 "serial_number": "SPDK00000000000004",
00:09:47.452 "model_number": "SPDK bdev Controller",
00:09:47.452 "max_namespaces": 32,
00:09:47.452 "min_cntlid": 1,
00:09:47.452 "max_cntlid": 65519,
00:09:47.452 "namespaces": [
00:09:47.452 {
00:09:47.452 "nsid": 1,
00:09:47.452 "bdev_name": "Null4",
00:09:47.452 "name": "Null4",
00:09:47.452 "nguid": "FF4F8AC2A77E45DD9490359566F73F30",
00:09:47.452 "uuid": "ff4f8ac2-a77e-45dd-9490-359566f73f30"
00:09:47.452 }
00:09:47.452 ]
00:09:47.452 }
00:09:47.452 ]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.452 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.453 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:47.453 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:47.453 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.453 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.710 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.711 rmmod nvme_tcp 00:09:47.711 rmmod nvme_fabrics 00:09:47.711 rmmod nvme_keyring 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2275505 ']' 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2275505 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2275505 ']' 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2275505 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2275505 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2275505' 00:09:47.711 killing process with pid 2275505 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2275505 00:09:47.711 23:13:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2275505 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.086 23:13:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.988 23:14:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.247 00:09:51.247 real 0m10.154s 00:09:51.247 user 0m9.519s 00:09:51.247 sys 0m4.388s 00:09:51.247 23:14:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.247 23:14:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.247 ************************************ 00:09:51.247 END TEST nvmf_target_discovery 00:09:51.247 ************************************ 00:09:51.247 23:14:00 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:51.247 23:14:00 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:51.247 23:14:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:51.247 23:14:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.247 23:14:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.247 ************************************ 00:09:51.247 START TEST nvmf_referrals 00:09:51.247 ************************************ 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:51.247 * Looking for test storage... 00:09:51.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.247 23:14:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.520 23:14:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.520 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:56.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:56.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.521 23:14:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:56.521 Found net devices under 0000:86:00.0: cvl_0_0 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:56.521 Found net devices under 0000:86:00.1: cvl_0_1 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.521 23:14:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:09:56.521 00:09:56.521 --- 10.0.0.2 ping statistics --- 00:09:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.521 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:56.521 00:09:56.521 --- 10.0.0.1 ping statistics --- 00:09:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.521 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2279290 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2279290 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2279290 ']' 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.521 23:14:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 [2024-07-10 23:14:05.527026] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:09:56.521 [2024-07-10 23:14:05.527128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.521 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.780 [2024-07-10 23:14:05.635642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.038 [2024-07-10 23:14:05.860740] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.038 [2024-07-10 23:14:05.860783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.038 [2024-07-10 23:14:05.860794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.038 [2024-07-10 23:14:05.860803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.038 [2024-07-10 23:14:05.860829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.038 [2024-07-10 23:14:05.860898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.038 [2024-07-10 23:14:05.860972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.038 [2024-07-10 23:14:05.861034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.038 [2024-07-10 23:14:05.861045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.297 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.297 [2024-07-10 23:14:06.359014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.556 
[2024-07-10 23:14:06.375209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:57.556 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:57.815 23:14:06 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.815 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:57.816 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:58.074 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:58.074 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:58.074 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.074 23:14:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.074 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:58.332 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:58.332 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:58.333 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:58.635 23:14:07 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.635 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.899 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.900 rmmod nvme_tcp 00:09:58.900 rmmod nvme_fabrics 00:09:58.900 rmmod nvme_keyring 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2279290 ']' 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2279290 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2279290 ']' 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2279290 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2279290 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2279290' 00:09:58.900 killing process with pid 2279290 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2279290 00:09:58.900 23:14:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2279290 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.276 23:14:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.813 23:14:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:02.813 
00:10:02.813 real 0m11.213s 00:10:02.813 user 0m14.416s 00:10:02.813 sys 0m4.682s 00:10:02.813 23:14:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.813 23:14:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.813 ************************************ 00:10:02.813 END TEST nvmf_referrals 00:10:02.813 ************************************ 00:10:02.813 23:14:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:02.813 23:14:11 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:02.813 23:14:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:02.813 23:14:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.813 23:14:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:02.813 ************************************ 00:10:02.813 START TEST nvmf_connect_disconnect 00:10:02.813 ************************************ 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:02.813 * Looking for test storage... 00:10:02.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:02.813 23:14:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:07.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:07.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.022 
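The device scan around this point is essentially a sysfs walk: each supported PCI function is resolved to its kernel netdev name. In outline (a simplified sketch, not the common.sh code verbatim; the two e810 addresses are the ones this run reports):

for pci in 0000:86:00.0 0000:86:00.1; do
    # each PCI function exposes its bound netdev(s) under .../net/
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done
done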
23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:07.022 Found net devices under 0000:86:00.0: cvl_0_0 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:07.022 Found net devices under 0000:86:00.1: cvl_0_1 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- 
# NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.022 23:14:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.022 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.022 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:07.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:10:07.282 00:10:07.282 --- 10.0.0.2 ping statistics --- 00:10:07.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.282 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:10:07.282 00:10:07.282 --- 10.0.0.1 ping statistics --- 00:10:07.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.282 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2283369 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2283369 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2283369 ']' 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.282 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.283 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.283 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.283 23:14:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:07.283 [2024-07-10 23:14:16.329840] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:10:07.283 [2024-07-10 23:14:16.329923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.542 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.542 [2024-07-10 23:14:16.439617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.801 [2024-07-10 23:14:16.649761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.801 [2024-07-10 23:14:16.649807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.801 [2024-07-10 23:14:16.649819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.801 [2024-07-10 23:14:16.649844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.801 [2024-07-10 23:14:16.649854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.801 [2024-07-10 23:14:16.649991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.801 [2024-07-10 23:14:16.650069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.801 [2024-07-10 23:14:16.650122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.801 [2024-07-10 23:14:16.650133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.060 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.060 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:10:08.060 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.060 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.060 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:08.319 [2024-07-10 23:14:17.142887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.319 23:14:17 
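The RPC sequence above and just below assembles the whole target stack for this test. Condensed into plain commands (a sketch using the same netns layout and the in-tree rpc.py against the default /var/tmp/spdk.sock; flags are the ones logged here):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Launch the target inside the namespace that owns the test port.
# (The harness waits for the RPC socket before issuing commands.)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # TCP transport, 8 KiB I/O unit
$RPC bdev_malloc_create 64 512                          # 64 MiB RAM bdev -> "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420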
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:08.319 [2024-07-10 23:14:17.258587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:10:08.319 23:14:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:10.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.854 
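Each "disconnected 1 controller(s)" message that follows is one pass of the test's loop: connect to the subsystem, wait for the controller, disconnect. One iteration, approximately (host NQN/ID are the ones generated for this run; the wait step is left as a comment):

for i in $(seq 1 100); do      # num_iterations=100, as set above
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
    # ... wait for the controller/namespace to appear, then tear it down:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits the message seen below
done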
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:07.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:09.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:11.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:13.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:16.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:18.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:20.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:23.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:25.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:27.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:30.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:32.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:34.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:37.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:39.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:41.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:44.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:46.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:48.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:51.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:53.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:55.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:57.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:00.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:02.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:05.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:07.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:09.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:12.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:14.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:16.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:18.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:21.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:23.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:26.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:28.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:30.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:33.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:35.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:37.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:40.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:42.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:45.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:47.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:49.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:51.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:54.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:56.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:59.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.145 rmmod nvme_tcp 00:14:02.145 rmmod nvme_fabrics 00:14:02.145 rmmod nvme_keyring 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2283369 ']' 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2283369 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
2283369 ']' 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2283369 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.145 23:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2283369 00:14:02.145 23:18:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:02.145 23:18:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:02.145 23:18:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2283369' 00:14:02.145 killing process with pid 2283369 00:14:02.145 23:18:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2283369 00:14:02.145 23:18:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2283369 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.523 23:18:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.057 23:18:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.057 00:14:06.057 real 4m3.235s 00:14:06.057 user 15m33.789s 00:14:06.057 sys 0m20.166s 00:14:06.057 23:18:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.057 23:18:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:06.057 ************************************ 00:14:06.057 END TEST nvmf_connect_disconnect 00:14:06.057 ************************************ 00:14:06.057 23:18:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:06.057 23:18:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:06.057 23:18:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.057 23:18:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.057 23:18:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.057 ************************************ 00:14:06.057 START TEST nvmf_multitarget 00:14:06.057 ************************************ 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:06.057 * Looking for test storage... 
00:14:06.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
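_remove_spdk_ns here clears the namespace left over from the previous test before nvmf_tcp_init rebuilds it. The topology, in outline (commands as they appear elsewhere in this log; cvl_0_0/cvl_0_1 are the port names detected on this node):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # sanity check, as logged below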
00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.057 23:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:11.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:11.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:11.333 Found net devices under 0000:86:00.0: cvl_0_0 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:11.333 Found net devices under 0000:86:00.1: cvl_0_1 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.333 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:14:11.334 00:14:11.334 --- 10.0.0.2 ping statistics --- 00:14:11.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.334 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:14:11.334 00:14:11.334 --- 10.0.0.1 ping statistics --- 00:14:11.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.334 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2328030 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2328030 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2328030 ']' 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.334 23:18:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:11.334 [2024-07-10 23:18:19.970518] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
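
The namespace plumbing traced above is the entire test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator side, the two ports evidently looped back to each other. Condensed from the trace, with the same names and addresses (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

The two pings are the gate for everything that follows: only after both directions answer does nvmf_tcp_init return 0 and the target application get started.
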
00:14:11.334 [2024-07-10 23:18:19.970606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.334 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.334 [2024-07-10 23:18:20.081196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.334 [2024-07-10 23:18:20.294988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.334 [2024-07-10 23:18:20.295032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.334 [2024-07-10 23:18:20.295044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.334 [2024-07-10 23:18:20.295052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.334 [2024-07-10 23:18:20.295061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.334 [2024-07-10 23:18:20.295137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.334 [2024-07-10 23:18:20.295217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.334 [2024-07-10 23:18:20.295241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.334 [2024-07-10 23:18:20.295251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:11.903 23:18:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:11.903 "nvmf_tgt_1" 00:14:12.163 23:18:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:12.163 "nvmf_tgt_2" 00:14:12.163 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:12.163 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:12.163 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:14:12.163 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:12.422 true 00:14:12.422 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:12.422 true 00:14:12.422 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:12.422 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.681 rmmod nvme_tcp 00:14:12.681 rmmod nvme_fabrics 00:14:12.681 rmmod nvme_keyring 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2328030 ']' 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2328030 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2328030 ']' 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2328030 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328030 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328030' 00:14:12.681 killing process with pid 2328030 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2328030 00:14:12.681 23:18:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2328030 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.059 23:18:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.968 23:18:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.968 00:14:15.968 real 0m10.298s 00:14:15.968 user 0m11.673s 00:14:15.968 sys 0m4.350s 00:14:15.968 23:18:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.968 23:18:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:15.968 ************************************ 00:14:15.968 END TEST nvmf_multitarget 00:14:15.968 ************************************ 00:14:16.228 23:18:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:16.228 23:18:25 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:16.228 23:18:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:16.228 23:18:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.228 23:18:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:16.228 ************************************ 00:14:16.228 START TEST nvmf_rpc 00:14:16.228 ************************************ 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:16.228 * Looking for test storage... 
00:14:16.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.228 23:18:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.229 23:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
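
nvmftestinit repeats the NIC discovery for every test binary: gather_supported_nvmf_pci_devs buckets PCI functions into the e810/x722/mlx arrays keyed by vendor:device ID (the trace below matches both ports against Intel 0x8086 / 0x159b) before resolving them to netdevs. A rough standalone sketch of the same classification, with the harness's array bookkeeping omitted (the ID filter comes from the trace):

  # Find E810-family candidates by PCI vendor/device ID, as the harness does.
  for pci in /sys/bus/pci/devices/*; do
      if [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]]; then
          echo "Found ${pci##*/} (0x8086 - 0x159b)"
      fi
  done
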
00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:21.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:21.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:21.572 Found net devices under 0000:86:00.0: cvl_0_0 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:21.572 Found net devices under 0000:86:00.1: cvl_0_1 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.572 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:14:21.573 00:14:21.573 --- 10.0.0.2 ping statistics --- 00:14:21.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.573 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:14:21.573 00:14:21.573 --- 10.0.0.1 ping statistics --- 00:14:21.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.573 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2331889 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2331889 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2331889 ']' 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.573 23:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.833 [2024-07-10 23:18:30.642968] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:14:21.833 [2024-07-10 23:18:30.643061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.833 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.833 [2024-07-10 23:18:30.752758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.092 [2024-07-10 23:18:30.970883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.092 [2024-07-10 23:18:30.970927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
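
As in the multitarget run, nvmfappstart launches the target inside the namespace, so it listens on 10.0.0.2 while rpc.py and the nvme initiator reach it from the root namespace; waitforlisten then blocks until the RPC socket answers. A simplified sketch of that launch-and-wait step (binary path and flags from the trace; the socket poll is a stand-in for waitforlisten's real retry loop):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &              # shm id 0, all tracepoints, cores 0-3
  nvmfpid=$!
  while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done       # wait for the RPC socket

The -m 0xF core mask is why the log reports exactly four reactors: one per core in the mask.
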
00:14:22.092 [2024-07-10 23:18:30.970938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.092 [2024-07-10 23:18:30.970947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.092 [2024-07-10 23:18:30.970956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.092 [2024-07-10 23:18:30.971045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.092 [2024-07-10 23:18:30.971143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.092 [2024-07-10 23:18:30.971184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.092 [2024-07-10 23:18:30.971194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:22.659 "tick_rate": 2300000000, 00:14:22.659 "poll_groups": [ 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_000", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [] 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_001", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [] 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_002", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [] 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_003", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [] 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 }' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.659 [2024-07-10 23:18:31.579323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:22.659 "tick_rate": 2300000000, 00:14:22.659 "poll_groups": [ 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_000", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [ 00:14:22.659 { 00:14:22.659 "trtype": "TCP" 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_001", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [ 00:14:22.659 { 00:14:22.659 "trtype": "TCP" 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_002", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [ 00:14:22.659 { 00:14:22.659 "trtype": "TCP" 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "nvmf_tgt_poll_group_003", 00:14:22.659 "admin_qpairs": 0, 00:14:22.659 "io_qpairs": 0, 00:14:22.659 "current_admin_qpairs": 0, 00:14:22.659 "current_io_qpairs": 0, 00:14:22.659 "pending_bdev_io": 0, 00:14:22.659 "completed_nvme_io": 0, 00:14:22.659 "transports": [ 00:14:22.659 { 00:14:22.659 "trtype": "TCP" 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 }' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:22.659 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
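
The checks above are rpc.sh's jcount/jsum helpers: thin jq pipelines over nvmf_get_stats that count matches with wc -l and total them with awk. Before nvmf_create_transport runs, every poll group's transports array is empty (jq reports null); after the -t tcp -o -u 8192 transport is created, each of the four groups carries a TCP transport while all qpair counters still read zero. The idiom, condensed (rpc_cmd is the harness's wrapper around scripts/rpc.py):

  stats=$(scripts/rpc.py nvmf_get_stats)
  jq '.poll_groups[].name' <<<"$stats" | wc -l                              # 4 poll groups for -m 0xF
  jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}'   # 0, nothing connected yet
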
00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.660 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.920 Malloc1 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.920 [2024-07-10 23:18:31.821324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:22.920 [2024-07-10 23:18:31.850637] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:14:22.920 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:22.920 could not add new controller: failed to write to nvme-fabrics device 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.920 23:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.300 23:18:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.300 23:18:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:24.300 23:18:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.300 23:18:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:24.300 23:18:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:26.204 23:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:26.204 23:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:26.204 23:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.204 23:18:35 
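
This is the access-control sequence at the heart of rpc.sh: with the subsystem created and allow_any_host disabled (-d), a connect that presents the generated host NQN is rejected by nvmf_qpair_access_allowed, and only after nvmf_subsystem_add_host registers that NQN does the identical connect succeed. Condensed, with NQNs and address from the trace ($hostnqn stands for the long uuid-based host NQN):

  subnqn=nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn" \
      && echo "unexpected success"                       # expected failure: does not allow host
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn"   # put the host on the allow list
  nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn"   # now succeeds

waitforserial then polls lsblk -l -o NAME,SERIAL until a namespace with serial SPDKISFASTANDAWESOME appears, which is the proof that the fabric connect actually produced a block device.
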
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:26.204 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.463 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.464 [2024-07-10 23:18:35.316605] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:14:26.464 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:26.464 could not add new controller: failed to write to nvme-fabrics device 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.464 23:18:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.842 23:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.842 23:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.842 23:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.842 23:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.842 23:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.746 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:30.005 23:18:38 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.005 [2024-07-10 23:18:38.845472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.005 23:18:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.939 23:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:30.939 23:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:30.939 23:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.939 23:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:30.939 23:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:33.469 23:18:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.469 [2024-07-10 23:18:42.286483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.469 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.470 23:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.470 23:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.407 23:18:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.407 23:18:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:14:34.407 23:18:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.407 23:18:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:34.407 23:18:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.940 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.940 [2024-07-10 23:18:45.762559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.941 23:18:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.320 23:18:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.320 23:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:38.320 23:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.320 23:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:38.320 23:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:40.225 23:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.225 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.225 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:40.225 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:40.225 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.225 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:40.225 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.483 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:40.483 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.483 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.483 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.483 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.484 [2024-07-10 23:18:49.333829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.484 23:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.421 23:18:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:41.421 23:18:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:41.421 23:18:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.421 23:18:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:41.421 23:18:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.957 
23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 [2024-07-10 23:18:52.808874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 23:18:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.957 23:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:45.334 23:18:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.334 23:18:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.334 23:18:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.334 23:18:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:45.334 23:18:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:47.240 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 [2024-07-10 23:18:56.390199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 [2024-07-10 23:18:56.438321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 [2024-07-10 23:18:56.490496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
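The connect/verify cycles in this test gate on two polling helpers from common/autotest_common.sh; the xtrace above (sh@1198-1208 and sh@1219-1231) shows their moving parts. A minimal sketch consistent with those lines — the upstream bodies may differ in detail, and the serial argument is the SPDKISFASTANDAWESOME value used throughout this run:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2                                  # give the kernel time to surface the namespace
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches the subsystem serial
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    # poll until the serial disappears from lsblk output
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 2
    done
    return 0
}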
00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 [2024-07-10 23:18:56.538666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.530 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
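Each pass of this second loop runs the same six RPCs with no host connect in between, so it exercises fast subsystem create/teardown on the target side only (unlike the target/rpc.sh@81-94 loop earlier). A condensed sketch of the loop as xtraced from target/rpc.sh@99-107, with rpc_cmd wrapping scripts/rpc.py; the NQN, serial, address and port are the values used by this run:

loops=5
for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done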
00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.531 [2024-07-10 23:18:56.586860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.531 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:47.790 "tick_rate": 2300000000, 00:14:47.790 "poll_groups": [ 00:14:47.790 { 00:14:47.790 "name": "nvmf_tgt_poll_group_000", 00:14:47.790 "admin_qpairs": 2, 00:14:47.790 "io_qpairs": 168, 00:14:47.790 "current_admin_qpairs": 0, 00:14:47.790 "current_io_qpairs": 0, 00:14:47.790 "pending_bdev_io": 0, 00:14:47.790 "completed_nvme_io": 267, 00:14:47.790 "transports": [ 00:14:47.790 { 00:14:47.790 "trtype": "TCP" 00:14:47.790 } 00:14:47.790 ] 00:14:47.790 }, 00:14:47.790 { 00:14:47.790 "name": "nvmf_tgt_poll_group_001", 00:14:47.790 "admin_qpairs": 2, 00:14:47.790 "io_qpairs": 168, 00:14:47.790 "current_admin_qpairs": 0, 00:14:47.790 "current_io_qpairs": 0, 00:14:47.790 "pending_bdev_io": 0, 00:14:47.790 "completed_nvme_io": 268, 00:14:47.790 "transports": [ 00:14:47.790 { 00:14:47.790 "trtype": "TCP" 00:14:47.790 } 00:14:47.790 ] 00:14:47.790 }, 00:14:47.790 { 
00:14:47.790 "name": "nvmf_tgt_poll_group_002", 00:14:47.790 "admin_qpairs": 1, 00:14:47.790 "io_qpairs": 168, 00:14:47.790 "current_admin_qpairs": 0, 00:14:47.790 "current_io_qpairs": 0, 00:14:47.790 "pending_bdev_io": 0, 00:14:47.790 "completed_nvme_io": 220, 00:14:47.790 "transports": [ 00:14:47.790 { 00:14:47.790 "trtype": "TCP" 00:14:47.790 } 00:14:47.790 ] 00:14:47.790 }, 00:14:47.790 { 00:14:47.790 "name": "nvmf_tgt_poll_group_003", 00:14:47.790 "admin_qpairs": 2, 00:14:47.790 "io_qpairs": 168, 00:14:47.790 "current_admin_qpairs": 0, 00:14:47.790 "current_io_qpairs": 0, 00:14:47.790 "pending_bdev_io": 0, 00:14:47.790 "completed_nvme_io": 267, 00:14:47.790 "transports": [ 00:14:47.790 { 00:14:47.790 "trtype": "TCP" 00:14:47.790 } 00:14:47.790 ] 00:14:47.790 } 00:14:47.790 ] 00:14:47.790 }' 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:47.790 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.791 rmmod nvme_tcp 00:14:47.791 rmmod nvme_fabrics 00:14:47.791 rmmod nvme_keyring 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2331889 ']' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2331889 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2331889 ']' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2331889 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331889 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331889' 00:14:47.791 killing process with pid 2331889 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2331889 00:14:47.791 23:18:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2331889 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.696 23:18:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.599 23:19:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.599 00:14:51.599 real 0m35.341s 00:14:51.599 user 1m48.654s 00:14:51.599 sys 0m6.003s 00:14:51.599 23:19:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.599 23:19:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.600 ************************************ 00:14:51.600 END TEST nvmf_rpc 00:14:51.600 ************************************ 00:14:51.600 23:19:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:51.600 23:19:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:51.600 23:19:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:51.600 23:19:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.600 23:19:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.600 ************************************ 00:14:51.600 START TEST nvmf_invalid 00:14:51.600 ************************************ 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:51.600 * Looking for test storage... 
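The queue-pair totals asserted at the end of the previous test — admin_qpairs 2+2+1+2 = 7 and io_qpairs 4 x 168 = 672 across the four poll groups — come from a small jq/awk aggregator named jsum, whose pieces appear in the target/rpc.sh@19-20 xtrace. A sketch, assuming the nvmf_get_stats JSON saved in $stats at target/rpc.sh@110 is fed in via a here-string (the upstream helper may wire the input differently):

jsum() {
    local filter=$1
    # pull one number per poll group out of the stats JSON, then sum them
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# usage, as in the log:
# (( $(jsum '.poll_groups[].io_qpairs') > 0 ))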
00:14:51.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.600 23:19:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:56.877 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:56.877 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:56.877 Found net devices under 0000:86:00.0: cvl_0_0 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:56.877 Found net devices under 0000:86:00.1: cvl_0_1 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:14:56.877 00:14:56.877 --- 10.0.0.2 ping statistics --- 00:14:56.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.877 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:14:56.877 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:14:56.877 00:14:56.877 --- 10.0.0.1 ping statistics --- 00:14:56.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.878 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2340057 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2340057 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2340057 ']' 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.878 23:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:56.878 [2024-07-10 23:19:05.881082] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
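The "phy" transport setup that just completed moves the target port into its own network namespace so initiator and target traffic crosses a real link, then verifies reachability in both directions. The sequence below restates the commands from the nvmf/common.sh@242-268 xtrace above; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this run's values:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns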
00:14:56.878 [2024-07-10 23:19:05.881188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.878 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.137 [2024-07-10 23:19:05.992528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.396 [2024-07-10 23:19:06.223301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.396 [2024-07-10 23:19:06.223346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.396 [2024-07-10 23:19:06.223357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.396 [2024-07-10 23:19:06.223366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.396 [2024-07-10 23:19:06.223375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.397 [2024-07-10 23:19:06.223454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.397 [2024-07-10 23:19:06.223522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.397 [2024-07-10 23:19:06.223580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.397 [2024-07-10 23:19:06.223591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:57.656 23:19:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27732 00:14:57.914 [2024-07-10 23:19:06.853870] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:57.915 23:19:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:57.915 { 00:14:57.915 "nqn": "nqn.2016-06.io.spdk:cnode27732", 00:14:57.915 "tgt_name": "foobar", 00:14:57.915 "method": "nvmf_create_subsystem", 00:14:57.915 "req_id": 1 00:14:57.915 } 00:14:57.915 Got JSON-RPC error response 00:14:57.915 response: 00:14:57.915 { 00:14:57.915 "code": -32603, 00:14:57.915 "message": "Unable to find target foobar" 00:14:57.915 }' 00:14:57.915 23:19:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:57.915 { 00:14:57.915 "nqn": "nqn.2016-06.io.spdk:cnode27732", 00:14:57.915 "tgt_name": "foobar", 00:14:57.915 "method": "nvmf_create_subsystem", 00:14:57.915 "req_id": 1 00:14:57.915 } 00:14:57.915 Got JSON-RPC error response 00:14:57.915 response: 00:14:57.915 { 00:14:57.915 "code": -32603, 00:14:57.915 "message": "Unable to find target foobar" 
00:14:57.915 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:57.915 23:19:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:57.915 23:19:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17343 00:14:58.173 [2024-07-10 23:19:07.046548] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17343: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:58.173 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:58.173 { 00:14:58.173 "nqn": "nqn.2016-06.io.spdk:cnode17343", 00:14:58.173 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:58.173 "method": "nvmf_create_subsystem", 00:14:58.174 "req_id": 1 00:14:58.174 } 00:14:58.174 Got JSON-RPC error response 00:14:58.174 response: 00:14:58.174 { 00:14:58.174 "code": -32602, 00:14:58.174 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:58.174 }' 00:14:58.174 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:58.174 { 00:14:58.174 "nqn": "nqn.2016-06.io.spdk:cnode17343", 00:14:58.174 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:58.174 "method": "nvmf_create_subsystem", 00:14:58.174 "req_id": 1 00:14:58.174 } 00:14:58.174 Got JSON-RPC error response 00:14:58.174 response: 00:14:58.174 { 00:14:58.174 "code": -32602, 00:14:58.174 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:58.174 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:58.174 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:58.174 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8632 00:14:58.433 [2024-07-10 23:19:07.243213] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8632: invalid model number 'SPDK_Controller' 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:58.433 { 00:14:58.433 "nqn": "nqn.2016-06.io.spdk:cnode8632", 00:14:58.433 "model_number": "SPDK_Controller\u001f", 00:14:58.433 "method": "nvmf_create_subsystem", 00:14:58.433 "req_id": 1 00:14:58.433 } 00:14:58.433 Got JSON-RPC error response 00:14:58.433 response: 00:14:58.433 { 00:14:58.433 "code": -32602, 00:14:58.433 "message": "Invalid MN SPDK_Controller\u001f" 00:14:58.433 }' 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:58.433 { 00:14:58.433 "nqn": "nqn.2016-06.io.spdk:cnode8632", 00:14:58.433 "model_number": "SPDK_Controller\u001f", 00:14:58.433 "method": "nvmf_create_subsystem", 00:14:58.433 "req_id": 1 00:14:58.433 } 00:14:58.433 Got JSON-RPC error response 00:14:58.433 response: 00:14:58.433 { 00:14:58.433 "code": -32602, 00:14:58.433 "message": "Invalid MN SPDK_Controller\u001f" 00:14:58.433 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
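These first three negative tests drive nvmf_create_subsystem through rpc.py with deliberately bad input: a nonexistent target name (foobar), then a serial number and a model number that each carry a trailing non-printable 0x1f byte, asserting every time that the JSON-RPC error text ("Unable to find target", "Invalid SN", "Invalid MN") comes back. A hedged sketch of the pattern, with a placeholder rpc.py path and assuming an nvmf_tgt is already listening on /var/tmp/spdk.sock:

    # Sketch: negative-test an RPC call and assert on the JSON-RPC error text.
    RPC=/path/to/spdk/scripts/rpc.py   # placeholder path

    out=$("$RPC" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1) || true
    if [[ $out == *"Unable to find target"* ]]; then
        echo "PASS: bad target name rejected"
    else
        echo "FAIL: unexpected response: $out" >&2
    fi

    # Non-printable bytes are injected with $'...' quoting, e.g. a 0x1f
    # (unit separator) appended to an otherwise valid serial number:
    "$RPC" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\x1f' \
        nqn.2016-06.io.spdk:cnode2 2>&1 | grep -q 'Invalid SN' \
        && echo "PASS: bad serial number rejected"

The long printf %x / echo -e stretch that follows builds fully random serial and model numbers for the same kind of check; see the generator sketch after that loop.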
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.433 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 
23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 
23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '>:y)^Uc%uvJ}DIihfT&oX' 00:14:58.434 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>:y)^Uc%uvJ}DIihfT&oX' nqn.2016-06.io.spdk:cnode5767 00:14:58.694 [2024-07-10 23:19:07.568346] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5767: invalid serial number '>:y)^Uc%uvJ}DIihfT&oX' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:58.694 { 00:14:58.694 "nqn": "nqn.2016-06.io.spdk:cnode5767", 00:14:58.694 "serial_number": ">:y)^Uc%uvJ}DIihfT&oX", 00:14:58.694 "method": "nvmf_create_subsystem", 00:14:58.694 "req_id": 1 00:14:58.694 } 00:14:58.694 Got JSON-RPC error response 00:14:58.694 response: 00:14:58.694 { 
00:14:58.694 "code": -32602, 00:14:58.694 "message": "Invalid SN >:y)^Uc%uvJ}DIihfT&oX" 00:14:58.694 }' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:58.694 { 00:14:58.694 "nqn": "nqn.2016-06.io.spdk:cnode5767", 00:14:58.694 "serial_number": ">:y)^Uc%uvJ}DIihfT&oX", 00:14:58.694 "method": "nvmf_create_subsystem", 00:14:58.694 "req_id": 1 00:14:58.694 } 00:14:58.694 Got JSON-RPC error response 00:14:58.694 response: 00:14:58.694 { 00:14:58.694 "code": -32602, 00:14:58.694 "message": "Invalid SN >:y)^Uc%uvJ}DIihfT&oX" 00:14:58.694 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.694 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 
00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.695 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 
00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.954 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:14:58.955 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '!XRHhv7"G?Q{D&cQg%/+\ZF=Ns|;aP4wKn6y%z)2q' 00:14:58.955 23:19:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!XRHhv7"G?Q{D&cQg%/+\ZF=Ns|;aP4wKn6y%z)2q' nqn.2016-06.io.spdk:cnode21280 00:14:58.955 [2024-07-10 23:19:08.009932] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21280: invalid model number '!XRHhv7"G?Q{D&cQg%/+\ZF=Ns|;aP4wKn6y%z)2q' 00:14:59.213 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:59.213 { 00:14:59.213 "nqn": "nqn.2016-06.io.spdk:cnode21280", 00:14:59.213 "model_number": "!XRHhv7\"G?Q{D&cQg%/+\\ZF=Ns|;aP4wKn6y%z)2q", 00:14:59.213 "method": "nvmf_create_subsystem", 00:14:59.213 "req_id": 1 00:14:59.213 } 00:14:59.213 Got JSON-RPC error response 00:14:59.213 response: 00:14:59.213 { 00:14:59.213 "code": -32602, 00:14:59.213 "message": "Invalid MN !XRHhv7\"G?Q{D&cQg%/+\\ZF=Ns|;aP4wKn6y%z)2q" 00:14:59.213 }' 00:14:59.213 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:59.213 { 00:14:59.213 "nqn": "nqn.2016-06.io.spdk:cnode21280", 00:14:59.213 "model_number": "!XRHhv7\"G?Q{D&cQg%/+\\ZF=Ns|;aP4wKn6y%z)2q", 00:14:59.213 "method": "nvmf_create_subsystem", 00:14:59.213 "req_id": 1 00:14:59.213 } 00:14:59.213 Got JSON-RPC error response 00:14:59.213 response: 00:14:59.213 { 00:14:59.213 "code": -32602, 00:14:59.213 "message": "Invalid MN !XRHhv7\"G?Q{D&cQg%/+\\ZF=Ns|;aP4wKn6y%z)2q" 00:14:59.213 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:59.213 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:59.213 [2024-07-10 23:19:08.198642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.213 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:59.473 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:59.473 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:59.473 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:59.473 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:59.473 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:59.731 [2024-07-10 23:19:08.588005] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:59.731 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:59.731 { 00:14:59.731 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:59.731 "listen_address": { 00:14:59.731 "trtype": "tcp", 00:14:59.731 "traddr": "", 00:14:59.731 "trsvcid": "4421" 00:14:59.731 }, 00:14:59.731 "method": "nvmf_subsystem_remove_listener", 00:14:59.731 "req_id": 1 00:14:59.731 } 00:14:59.731 Got JSON-RPC error response 00:14:59.731 response: 00:14:59.731 { 00:14:59.731 "code": -32602, 00:14:59.731 "message": "Invalid parameters" 00:14:59.731 }' 00:14:59.731 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:59.731 { 00:14:59.731 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:59.731 "listen_address": { 00:14:59.731 
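The repetitive stretch above is gen_random_s at work under xtrace: it assembles a 21-character random serial number and then a 41-character random model number one character at a time from the decimal codes in $chars (printable ASCII 32-127), echoing every printf/echo/append step, which is why the log balloons while the underlying generator is only a few lines. The final `[[ ... == \- ]]` test guards against a string that begins with '-', which rpc.py would parse as an option. A sketch of the generator (the leading-dash handling here is illustrative, not the script's exact logic):

    # Sketch of gen_random_s: build an N-character random printable-ASCII string.
    gen_random_s() {
        local length=$1 ll hex string=
        local chars=($(seq 32 127))   # decimal codes, as in the test's chars table
        for (( ll = 0; ll < length; ll++ )); do
            # pick a random code, render it as hex, append the character
            printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "\x$hex")
        done
        # avoid a leading '-', which rpc.py would treat as an option flag
        # (illustrative guard; the real script handles this case differently)
        [[ ${string:0:1} == - ]] && string="x${string:1}"
        echo "$string"
    }

    gen_random_s 21   # e.g. a random serial number like the one tested above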
"trtype": "tcp", 00:14:59.731 "traddr": "", 00:14:59.731 "trsvcid": "4421" 00:14:59.731 }, 00:14:59.731 "method": "nvmf_subsystem_remove_listener", 00:14:59.731 "req_id": 1 00:14:59.731 } 00:14:59.731 Got JSON-RPC error response 00:14:59.731 response: 00:14:59.731 { 00:14:59.731 "code": -32602, 00:14:59.731 "message": "Invalid parameters" 00:14:59.731 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:59.731 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode481 -i 0 00:14:59.731 [2024-07-10 23:19:08.780653] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode481: invalid cntlid range [0-65519] 00:14:59.988 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:59.988 { 00:14:59.988 "nqn": "nqn.2016-06.io.spdk:cnode481", 00:14:59.988 "min_cntlid": 0, 00:14:59.989 "method": "nvmf_create_subsystem", 00:14:59.989 "req_id": 1 00:14:59.989 } 00:14:59.989 Got JSON-RPC error response 00:14:59.989 response: 00:14:59.989 { 00:14:59.989 "code": -32602, 00:14:59.989 "message": "Invalid cntlid range [0-65519]" 00:14:59.989 }' 00:14:59.989 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:59.989 { 00:14:59.989 "nqn": "nqn.2016-06.io.spdk:cnode481", 00:14:59.989 "min_cntlid": 0, 00:14:59.989 "method": "nvmf_create_subsystem", 00:14:59.989 "req_id": 1 00:14:59.989 } 00:14:59.989 Got JSON-RPC error response 00:14:59.989 response: 00:14:59.989 { 00:14:59.989 "code": -32602, 00:14:59.989 "message": "Invalid cntlid range [0-65519]" 00:14:59.989 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:59.989 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2420 -i 65520 00:14:59.989 [2024-07-10 23:19:08.969333] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2420: invalid cntlid range [65520-65519] 00:14:59.989 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:59.989 { 00:14:59.989 "nqn": "nqn.2016-06.io.spdk:cnode2420", 00:14:59.989 "min_cntlid": 65520, 00:14:59.989 "method": "nvmf_create_subsystem", 00:14:59.989 "req_id": 1 00:14:59.989 } 00:14:59.989 Got JSON-RPC error response 00:14:59.989 response: 00:14:59.989 { 00:14:59.989 "code": -32602, 00:14:59.989 "message": "Invalid cntlid range [65520-65519]" 00:14:59.989 }' 00:14:59.989 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:59.989 { 00:14:59.989 "nqn": "nqn.2016-06.io.spdk:cnode2420", 00:14:59.989 "min_cntlid": 65520, 00:14:59.989 "method": "nvmf_create_subsystem", 00:14:59.989 "req_id": 1 00:14:59.989 } 00:14:59.989 Got JSON-RPC error response 00:14:59.989 response: 00:14:59.989 { 00:14:59.989 "code": -32602, 00:14:59.989 "message": "Invalid cntlid range [65520-65519]" 00:14:59.989 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:59.989 23:19:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10290 -I 0 00:15:00.248 [2024-07-10 23:19:09.157959] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10290: invalid cntlid range [1-0] 00:15:00.248 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:00.248 { 00:15:00.248 "nqn": 
"nqn.2016-06.io.spdk:cnode10290", 00:15:00.248 "max_cntlid": 0, 00:15:00.248 "method": "nvmf_create_subsystem", 00:15:00.248 "req_id": 1 00:15:00.248 } 00:15:00.248 Got JSON-RPC error response 00:15:00.248 response: 00:15:00.248 { 00:15:00.248 "code": -32602, 00:15:00.248 "message": "Invalid cntlid range [1-0]" 00:15:00.248 }' 00:15:00.248 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:00.248 { 00:15:00.248 "nqn": "nqn.2016-06.io.spdk:cnode10290", 00:15:00.248 "max_cntlid": 0, 00:15:00.248 "method": "nvmf_create_subsystem", 00:15:00.248 "req_id": 1 00:15:00.248 } 00:15:00.248 Got JSON-RPC error response 00:15:00.248 response: 00:15:00.248 { 00:15:00.248 "code": -32602, 00:15:00.248 "message": "Invalid cntlid range [1-0]" 00:15:00.248 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.248 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3823 -I 65520 00:15:00.507 [2024-07-10 23:19:09.346651] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3823: invalid cntlid range [1-65520] 00:15:00.507 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:00.507 { 00:15:00.507 "nqn": "nqn.2016-06.io.spdk:cnode3823", 00:15:00.507 "max_cntlid": 65520, 00:15:00.507 "method": "nvmf_create_subsystem", 00:15:00.507 "req_id": 1 00:15:00.507 } 00:15:00.507 Got JSON-RPC error response 00:15:00.507 response: 00:15:00.507 { 00:15:00.507 "code": -32602, 00:15:00.507 "message": "Invalid cntlid range [1-65520]" 00:15:00.507 }' 00:15:00.507 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:00.507 { 00:15:00.507 "nqn": "nqn.2016-06.io.spdk:cnode3823", 00:15:00.507 "max_cntlid": 65520, 00:15:00.507 "method": "nvmf_create_subsystem", 00:15:00.507 "req_id": 1 00:15:00.507 } 00:15:00.507 Got JSON-RPC error response 00:15:00.507 response: 00:15:00.507 { 00:15:00.507 "code": -32602, 00:15:00.507 "message": "Invalid cntlid range [1-65520]" 00:15:00.507 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.507 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26408 -i 6 -I 5 00:15:00.507 [2024-07-10 23:19:09.523271] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26408: invalid cntlid range [6-5] 00:15:00.507 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:00.507 { 00:15:00.507 "nqn": "nqn.2016-06.io.spdk:cnode26408", 00:15:00.507 "min_cntlid": 6, 00:15:00.507 "max_cntlid": 5, 00:15:00.507 "method": "nvmf_create_subsystem", 00:15:00.507 "req_id": 1 00:15:00.507 } 00:15:00.508 Got JSON-RPC error response 00:15:00.508 response: 00:15:00.508 { 00:15:00.508 "code": -32602, 00:15:00.508 "message": "Invalid cntlid range [6-5]" 00:15:00.508 }' 00:15:00.508 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:00.508 { 00:15:00.508 "nqn": "nqn.2016-06.io.spdk:cnode26408", 00:15:00.508 "min_cntlid": 6, 00:15:00.508 "max_cntlid": 5, 00:15:00.508 "method": "nvmf_create_subsystem", 00:15:00.508 "req_id": 1 00:15:00.508 } 00:15:00.508 Got JSON-RPC error response 00:15:00.508 response: 00:15:00.508 { 00:15:00.508 "code": -32602, 00:15:00.508 "message": "Invalid cntlid range [6-5]" 00:15:00.508 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.508 23:19:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:00.767 { 00:15:00.767 "name": "foobar", 00:15:00.767 "method": "nvmf_delete_target", 00:15:00.767 "req_id": 1 00:15:00.767 } 00:15:00.767 Got JSON-RPC error response 00:15:00.767 response: 00:15:00.767 { 00:15:00.767 "code": -32602, 00:15:00.767 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:00.767 }' 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:00.767 { 00:15:00.767 "name": "foobar", 00:15:00.767 "method": "nvmf_delete_target", 00:15:00.767 "req_id": 1 00:15:00.767 } 00:15:00.767 Got JSON-RPC error response 00:15:00.767 response: 00:15:00.767 { 00:15:00.767 "code": -32602, 00:15:00.767 "message": "The specified target doesn't exist, cannot delete it." 00:15:00.767 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.767 rmmod nvme_tcp 00:15:00.767 rmmod nvme_fabrics 00:15:00.767 rmmod nvme_keyring 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2340057 ']' 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2340057 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2340057 ']' 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2340057 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2340057 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2340057' 00:15:00.767 killing process with pid 2340057 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2340057 00:15:00.767 23:19:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2340057 00:15:02.143 23:19:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.143 23:19:11 
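Teardown above is the EXIT trap firing nvmftestfini: the modprobe -v -r loop unloads nvme-tcp together with the nvme_fabrics and nvme_keyring modules it pulled in, and killprocess checks via ps what pid 2340057 currently names (and whether it is running under sudo) before signalling and waiting on it. A sketch of a defensive kill helper in the same spirit (this variant simply refuses the sudo case rather than escalating):

    # Sketch: kill a daemon by pid only after sanity-checking what the pid
    # currently names, then reap it, mirroring the killprocess flow above.
    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || { echo "pid $pid already gone"; return 0; }
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && { echo "refusing to kill sudo" >&2; return 1; }
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reaping only works if it is our child
    }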
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.143 23:19:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.143 23:19:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.144 23:19:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.144 23:19:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.144 23:19:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.144 23:19:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.052 23:19:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.052 00:15:04.052 real 0m12.633s 00:15:04.052 user 0m22.031s 00:15:04.052 sys 0m4.932s 00:15:04.052 23:19:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.312 23:19:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:04.312 ************************************ 00:15:04.312 END TEST nvmf_invalid 00:15:04.312 ************************************ 00:15:04.312 23:19:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:04.312 23:19:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:04.312 23:19:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:04.312 23:19:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.312 23:19:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:04.312 ************************************ 00:15:04.312 START TEST nvmf_abort 00:15:04.312 ************************************ 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:04.312 * Looking for test storage... 
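With nvmf_invalid finished in 12.6 s of wall time, run_test closes its banner and the suite immediately launches the next target script, abort.sh, with the same --transport=tcp argument. A sketch of a run_test-style wrapper that produces the START/END banners and timing seen here (banner text simplified; not the suite's exact implementation):

    # Sketch: banner, time, and forward arguments to a test script,
    # in the style of the run_test wrapper used by this suite.
    run_test_sketch() {
        local name=$1 rc; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name (rc=$rc)"
        echo "************************************"
        return "$rc"
    }

    # e.g.: run_test_sketch nvmf_abort ./test/nvmf/target/abort.sh --transport=tcp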
00:15:04.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.312 23:19:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.313 23:19:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:09.585 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.585 23:19:17 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:09.585 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.585 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:09.586 Found net devices under 0000:86:00.0: cvl_0_0 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:09.586 Found net devices under 0000:86:00.1: cvl_0_1 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.586 23:19:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:15:09.586 00:15:09.586 --- 10.0.0.2 ping statistics --- 00:15:09.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.586 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:15:09.586 00:15:09.586 --- 10.0.0.1 ping statistics --- 00:15:09.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.586 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2344511 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2344511 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2344511 ']' 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.586 23:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:09.586 [2024-07-10 23:19:18.346825] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:15:09.586 [2024-07-10 23:19:18.346911] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.586 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.586 [2024-07-10 23:19:18.455083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.846 [2024-07-10 23:19:18.668066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.846 [2024-07-10 23:19:18.668115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
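nvmf_tcp_init above wires the physical test topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), while its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1); the two pings verify both directions before any NVMe traffic starts. Condensed from the commands in the trace (a sketch of the helper, not the helper itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1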
00:15:09.846 [2024-07-10 23:19:18.668129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.846 [2024-07-10 23:19:18.668138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.846 [2024-07-10 23:19:18.668147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.846 [2024-07-10 23:19:18.668289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.846 [2024-07-10 23:19:18.668371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.846 [2024-07-10 23:19:18.668381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.106 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.106 [2024-07-10 23:19:19.170606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 Malloc0 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 Delay0 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 [2024-07-10 23:19:19.315631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.365 23:19:19 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:10.365 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.624 [2024-07-10 23:19:19.459068] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:13.159 Initializing NVMe Controllers 00:15:13.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:13.159 controller IO queue size 128 less than required 00:15:13.159 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:13.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:13.159 Initialization complete. Launching workers. 
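The target side was assembled through rpc_cmd in the steps above; the delay bdev is the key ingredient, since its artificial latency (bdev_delay_create arguments are in microseconds, so 1000000 is roughly one second) keeps reads in flight long enough for the abort example to catch them. Roughly equivalent rpc.py calls, reconstructed from the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s latency on every I/O path
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

In the counters that follow, 'success' appears to count aborts the target honored (each one fails the corresponding read, so it mirrors the failed-read count) and 'unsuccess' aborts that arrived after the read had already completed.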
00:15:13.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 39052 00:15:13.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39113, failed to submit 66 00:15:13.159 success 39052, unsuccess 61, failed 0 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.159 rmmod nvme_tcp 00:15:13.159 rmmod nvme_fabrics 00:15:13.159 rmmod nvme_keyring 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2344511 ']' 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2344511 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2344511 ']' 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2344511 00:15:13.159 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344511 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344511' 00:15:13.160 killing process with pid 2344511 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2344511 00:15:13.160 23:19:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2344511 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.568 23:19:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.471 23:19:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.471 00:15:16.471 real 0m12.173s 00:15:16.471 user 0m16.136s 00:15:16.471 sys 0m4.805s 00:15:16.471 23:19:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.471 23:19:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:16.471 ************************************ 00:15:16.471 END TEST nvmf_abort 00:15:16.471 ************************************ 00:15:16.471 23:19:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.471 23:19:25 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:16.471 23:19:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.471 23:19:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.471 23:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.471 ************************************ 00:15:16.471 START TEST nvmf_ns_hotplug_stress 00:15:16.471 ************************************ 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:16.471 * Looking for test storage... 00:15:16.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.471 23:19:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.471 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.472 23:19:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.472 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.731 23:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:22.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:22.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.008 23:19:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:22.008 Found net devices under 0000:86:00.0: cvl_0_0 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:22.008 Found net devices under 0000:86:00.1: cvl_0_1 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.008 23:19:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:15:22.008 00:15:22.008 --- 10.0.0.2 ping statistics --- 00:15:22.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.008 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:22.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:15:22.008 00:15:22.008 --- 10.0.0.1 ping statistics --- 00:15:22.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.008 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2348769 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2348769 00:15:22.008 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2348769 ']' 00:15:22.009 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.009 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.009 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.009 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.009 23:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.009 [2024-07-10 23:19:30.922287] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
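nvmfappstart -m 0xE pins the target to a three-core mask (0xE = binary 1110, i.e. cores 1-3), which is why three reactor threads are reported below; core 0 is left free for the spdk_nvme_perf initiator, launched later with -c 0x1. A quick check of the arithmetic:

    # 0xE -> 1110b: bits 1, 2, 3 set => reactors on cores 1, 2 and 3
    echo 'obase=2; ibase=16; E' | bc    # prints 1110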
00:15:22.009 [2024-07-10 23:19:30.922372] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.009 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.009 [2024-07-10 23:19:31.027379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.266 [2024-07-10 23:19:31.255439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.266 [2024-07-10 23:19:31.255481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.266 [2024-07-10 23:19:31.255497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.267 [2024-07-10 23:19:31.255505] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.267 [2024-07-10 23:19:31.255514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.267 [2024-07-10 23:19:31.255639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.267 [2024-07-10 23:19:31.255699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.267 [2024-07-10 23:19:31.255710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:22.833 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.833 [2024-07-10 23:19:31.892339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.091 23:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:23.091 23:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.349 [2024-07-10 23:19:32.287791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.349 23:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.606 23:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:15:23.865 Malloc0 00:15:23.865 23:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:23.865 Delay0 00:15:23.865 23:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.124 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:24.381 NULL1 00:15:24.381 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:24.381 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2349135 00:15:24.381 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:24.381 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:24.381 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.638 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.638 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.896 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:24.896 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:24.896 true 00:15:24.896 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:24.896 23:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.154 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.412 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:25.412 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:25.412 true 00:15:25.670 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:25.670 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.670 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.928 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:25.928 23:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:26.187 true 00:15:26.187 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:26.187 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.187 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.446 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:26.446 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:26.704 true 00:15:26.704 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:26.705 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.705 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.964 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:26.964 23:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:27.223 true 00:15:27.223 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:27.223 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.482 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.482 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:27.482 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:27.741 true 00:15:27.742 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:27.742 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.000 23:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.000 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
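Each iteration of the hotplug loop above repeats the same four steps while spdk_nvme_perf keeps randreads running for 30 seconds: confirm the I/O generator is still alive, detach the Delay0 namespace, re-attach it, and grow the NULL1 bdev by one size unit (null_size starts at 1000). One iteration, condensed from the trace:

    kill -0 "$PERF_PID"                                           # perf must survive the hotplug
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"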
00:15:28.000 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:15:28.259 true
00:15:28.259 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:28.259 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:28.518 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:28.518 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:15:28.518 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:15:28.778 true
00:15:28.778 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:28.778 23:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:30.157 Read completed with error (sct=0, sc=11)
00:15:30.157 23:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:30.157 23:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:15:30.157 23:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:15:30.417 true
00:15:30.417 23:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:30.417 23:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:31.354 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:31.354 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:15:31.354 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:15:31.612 true
00:15:31.612 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:31.612 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:31.871 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:31.871 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:15:31.871 23:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:15:32.130 true
00:15:32.130 23:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:32.130 23:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:33.508 23:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:33.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:33.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:33.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:33.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:33.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:33.508 23:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:15:33.508 23:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:15:33.767 true
00:15:33.767 23:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:33.767 23:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:34.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:34.704 23:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:34.704 23:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:15:34.704 23:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:15:34.962 true
00:15:34.962 23:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:34.962 23:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:35.221 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:35.221 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:15:35.221 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
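From null_size=1001 through 1014 the records above repeat one fixed pattern: check that perf is still alive (kill -0), hot-remove namespace 1, re-add Delay0, then grow NULL1 by one size unit and resize it. A minimal sketch of that loop, reconstructed from the logged commands (loop structure and variable names are illustrative, not verbatim ns_hotplug_stress.sh):

  # Hot-plug stress loop: keep yanking and re-adding the namespace, and keep
  # resizing the null bdev, for as long as the background perf process runs.
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"   # 1001, 1002, ... in the log
  done

The interleaved "Read completed with error (sct=0, sc=11)" lines come from perf, whose in-flight reads fail while namespace 1 is being swapped out; "Message suppressed 999 times" means perf suppressed 999 identical completions before printing one.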
00:15:35.479 true
00:15:35.479 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:35.479 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:35.479 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:35.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:35.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:35.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:35.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:35.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:35.743 [2024-07-10 23:19:44.725292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[log flood trimmed: the preceding ctrlr_bdev.c *ERROR* line repeats, essentially verbatim, for timestamps 23:19:44.725292 through 23:19:44.749993]
00:15:35.748 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:15:35.748 [2024-07-10 23:19:44.750903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL
length 1 00:15:35.748 [2024-07-10 23:19:44.750965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.751993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.752964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753594] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.753988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.754995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 
[2024-07-10 23:19:44.755157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.748 [2024-07-10 23:19:44.755799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.755844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.755890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.755937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.755985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.756033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.756087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:35.749 [2024-07-10 23:19:44.756137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.756203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.756256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.756301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.749 [2024-07-10 23:19:44.756349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
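(Editor's note: the burst above is one validation failure repeated once per I/O. nvmf_bdev_ctrlr_read_cmd rejects a read whose payload, NLB * block size, exceeds the buffer described by the command's SGL, and the I/O completes with sct=0, sc=15, i.e. generic status 0x0f "Data SGL Length Invalid", which is what the "Message suppressed" line summarizes. The sketch below shows the shape of that check for orientation only; the function and parameter names are illustrative, not the verbatim ctrlr_bdev.c source.)

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the read-length validation flooding this log:
 * a read of num_blocks logical blocks produces num_blocks * block_size
 * bytes; if the initiator's SGL cannot hold that, the target fails the
 * command instead of submitting it to the bdev. */
static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint64_t sgl_length)
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n",
			num_blocks, block_size, sgl_length);
		return false; /* read completes with error: sct=0, sc=15 */
	}
	return true;
}

int
main(void)
{
	/* The exact case from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
	read_cmd_length_ok(1, 512, 1);
	return 0;
}

(Compiled standalone, this prints the same line that repeats above: NLB 1 * block size 512 > SGL length 1.)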
00:15:35.749 [2024-07-10 23:19:44.756393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:35.749 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
[... identical *ERROR* lines elided, 23:19:44.756503 through 23:19:44.758476 ...]
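(Editor's note: the rpc.py call traced above is the hotplug-stress driver: the test repeatedly resizes the null bdev NULL1, here to 1015, while I/O against the namespace continues, and the target keeps rejecting reads via the SGL-length check shown earlier. For orientation, the sketch below issues the equivalent JSON-RPC request directly over SPDK's RPC Unix socket; /var/tmp/spdk.sock is SPDK's default socket path, and the "new_size" parameter name follows scripts/rpc.py usage but should be treated as an assumption, not confirmed by this log.)

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
	/* Same call as: scripts/rpc.py bdev_null_resize NULL1 1015 */
	const char *req =
		"{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_null_resize\","
		"\"params\":{\"name\":\"NULL1\",\"new_size\":1015}}";
	char resp[4096];
	ssize_t n;

	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect");
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0) {
		perror("write");
		return 1;
	}
	/* A single read is enough for the short JSON reply. */
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}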
[... the identical *ERROR* flood continues, 23:19:44.758529 through 23:19:44.776528; duplicate entries elided ...]
00:15:35.752 [2024-07-10 23:19:44.776583] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.776965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.777933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 
[2024-07-10 23:19:44.777985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.778970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.752 [2024-07-10 23:19:44.779761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.779815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.779871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.779922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.779981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780792] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.780949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.781392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 
[2024-07-10 23:19:44.782849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.782952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.783987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.784998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785731] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.785983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.786037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.786084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.753 [2024-07-10 23:19:44.786143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.786976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 
[2024-07-10 23:19:44.787087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.787998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.788970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.789847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.789900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.789951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.789998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790570] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.790995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.791920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 
[2024-07-10 23:19:44.791969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.754 [2024-07-10 23:19:44.792507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.792992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.793975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794767] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.794979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.795975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.796027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 [2024-07-10 23:19:44.796074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:35.755 
[2024-07-10 23:19:44.796127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated continuously from 23:19:44.796183 through 23:19:44.809562; duplicates elided ...]
00:15:36.051 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
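Note: the *ERROR* line repeated above is the unit test deliberately issuing NVMe READ commands whose transfer length (NLB * block size = 1 * 512 bytes) exceeds the 1-byte buffer described by the SGL, so nvmf_bdev_ctrlr_read_cmd rejects each one with sct=0, sc=15 (Data SGL Length Invalid). A minimal, self-contained C sketch of that length check follows; the struct and function names are hypothetical illustrations of the logged condition, not the actual SPDK definitions:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the fields involved in the check. */
    struct read_req {
        uint64_t num_blocks;   /* NLB from the command, already converted to a 1-based count */
        uint64_t block_size;   /* namespace block size in bytes */
        uint32_t sgl_length;   /* total payload length described by the SGL */
    };

    /* Returns true if the read fits in the SGL-described buffer; otherwise
     * logs the same condition the test log shows and returns false. */
    static bool read_cmd_length_ok(const struct read_req *req)
    {
        if (req->num_blocks * req->block_size > req->sgl_length) {
            fprintf(stderr,
                    "Read NLB %" PRIu64 " * block size %" PRIu64
                    " > SGL length %" PRIu32 "\n",
                    req->num_blocks, req->block_size, req->sgl_length);
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* Mirrors the logged case: 1 block * 512 bytes against a 1-byte SGL;
         * prints the message seen above and exits non-zero. */
        struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
        return read_cmd_length_ok(&req) ? 0 : 1;
    }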
[... same *ERROR* line repeated continuously from 23:19:44.810084 through 23:19:44.828909; duplicates elided ...]
00:15:36.054 [2024-07-10 23:19:44.828953] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.829993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 
[2024-07-10 23:19:44.830378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.830978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.831993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.054 [2024-07-10 23:19:44.832049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.832983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.833032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.833083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.833141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.833201] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.833260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.834991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 
[2024-07-10 23:19:44.835212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.835960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.836987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.837975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838020] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.055 [2024-07-10 23:19:44.838318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.838969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 
[2024-07-10 23:19:44.839413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.839977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.840666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.841999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842662] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.842996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.843968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 
[2024-07-10 23:19:44.844020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.056 [2024-07-10 23:19:44.844458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.844507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.844555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.844601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.845962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.846953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847458] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.847998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 [2024-07-10 23:19:44.848703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.057 
00:15:36.057 [2024-07-10 23:19:44.848907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeats once per failed read, timestamps 23:19:44.848907 through 23:19:44.868366; duplicates elided ...]
00:15:36.061 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the *ERROR* line resumes repeating, timestamps 23:19:44.869112 through 23:19:44.881276, before the output is truncated mid-entry; duplicates elided ...]
size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.881985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882582] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.882968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.063 [2024-07-10 23:19:44.883639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.883698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.883749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.883802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.883858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.883910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 
[2024-07-10 23:19:44.884187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.884962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.885964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886836] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.886991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.887345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.888344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.888397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.888444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.888495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.888548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.064 [2024-07-10 23:19:44.888601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.888983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 
[2024-07-10 23:19:44.889132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.889952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.890994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.891997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892053] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.892953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 
[2024-07-10 23:19:44.893351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.893955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.894008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.894066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.894123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.894182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.894241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.065 [2024-07-10 23:19:44.894295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.894992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.895043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.895096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.895151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.895988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896762] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.896972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.897947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 
[2024-07-10 23:19:44.898172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.898977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.899987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.900035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.900084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.900133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.900188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.900258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.066 [2024-07-10 23:19:44.900305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.900968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901078] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.901960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 
[2024-07-10 23:19:44.902486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.902838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.903738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.903793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.903843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.903891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.903941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.903987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.904950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.905965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906011] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.067 [2024-07-10 23:19:44.906682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.906731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.906777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.906828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.906876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.906928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.906976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 
[2024-07-10 23:19:44.907501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.907969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.908971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.909955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910206] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.910512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.911972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.912024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.912078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 [2024-07-10 23:19:44.912132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.068 
[2024-07-10 23:19:44.912193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.912963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.913970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.914466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915754] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.915959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.916980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 
[2024-07-10 23:19:44.917073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.917987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.069 [2024-07-10 23:19:44.918638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.918693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.918746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.918953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.919725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920372] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.920992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 true 00:15:36.070 [2024-07-10 23:19:44.921267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:15:36.070 [2024-07-10 23:19:44.921750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.921962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.922948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.923961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924596] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.924753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.925400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.925462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.070 [2024-07-10 23:19:44.925514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.925947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 
[2024-07-10 23:19:44.926536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.926977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.071 [2024-07-10 23:19:44.927765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:15:36.072 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
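The repeated error above comes from one validation in the NVMe-oF target's read path: a read command may not transfer more data (NLB times the namespace block size) than its SGL describes, and each rejected read completes with sct=0, sc=15 (Data SGL Length Invalid), which is what the suppressed completion messages report. A minimal sketch of that guard, with invented names standing in for the real check in ctrlr_bdev.c:

# Hypothetical sketch, not the SPDK source: the guard behind
# "Read NLB 1 * block size 512 > SGL length 1".
def read_len_valid(nlb: int, block_size: int, sgl_length: int) -> bool:
    # A read may not ask for more bytes than the command's SGL covers.
    return nlb * block_size <= sgl_length

# The failing case repeated throughout this log: 1 block * 512 bytes
# requested against an SGL that describes only 1 byte.
assert not read_len_valid(nlb=1, block_size=512, sgl_length=1)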
00:15:36.074 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:36.074 23:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
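The two shell-trace lines above are the hotplug loop itself: kill -0 only tests that stress process 2349135 is still alive, then rpc.py asks the running target to hot-remove namespace 1 while reads are in flight, which is what provokes the error flood. A rough standalone equivalent of that RPC, assuming SPDK's default socket path /var/tmp/spdk.sock and using only the method name and parameters visible in the command line (everything else here is illustrative):

import json
import socket

# Illustrative only: roughly the request scripts/rpc.py issues for
# "nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_remove_ns",
    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 1},
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(65536).decode())    # raw JSON-RPC response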
[2024-07-10 23:19:44.959712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.959767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.959819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.959869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.959926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.959977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.960886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.076 [2024-07-10 23:19:44.961861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.961906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.961960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.962956] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.963981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 
[2024-07-10 23:19:44.964370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.964838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.965994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.077 [2024-07-10 23:19:44.966697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.966751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.966803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.966853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.966904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.966961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967123] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.967348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.968960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 
[2024-07-10 23:19:44.969115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.969983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.970947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971897] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.971990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.078 [2024-07-10 23:19:44.972816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.972867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.972922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.972961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.973009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.973055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.973105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.973164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 
[2024-07-10 23:19:44.973850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.973912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.973965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.974950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.975980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976523] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.976984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.977977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 
[2024-07-10 23:19:44.978028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.978833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.979451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.979506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.979560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.979615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.979669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.079 [2024-07-10 23:19:44.979718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.080 [2024-07-10 23:19:44.979764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.080 [2024-07-10 23:19:44.979818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.080 [2024-07-10 23:19:44.979867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.080 [2024-07-10 23:19:44.979914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.080 [2024-07-10 23:19:44.979962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated several hundred times, timestamps 23:19:44.980009 through 23:19:44.990972 omitted ...]
00:15:36.082 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated several hundred more times, timestamps 23:19:44.991013 through 23:19:45.011601 omitted ...]
00:15:36.085
[2024-07-10 23:19:45.011650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.011707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.011760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.011819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.011881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.011936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.011988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.012926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.013970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014434] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.085 [2024-07-10 23:19:45.014754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.014805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.015978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 
[2024-07-10 23:19:45.016528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.016966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.017955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.018824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019296] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.019961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 
[2024-07-10 23:19:45.020603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.086 [2024-07-10 23:19:45.020986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.021991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.022041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.022099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.022151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.022210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.023951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024002] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.024992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 
[2024-07-10 23:19:45.025308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.025986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.026955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.027002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.027055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.027110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.027171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.027227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.087 [2024-07-10 23:19:45.027669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.027723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.027767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.027818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.027865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.027917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.027969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028438] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.028972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 
[2024-07-10 23:19:45.029801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.029955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.030929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.031132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.031193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.088 [2024-07-10 23:19:45.031245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:15:36.088 [2024-07-10 23:19:45.031296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines from ctrlr_bdev.c: 309, timestamps 23:19:45.031 through 23:19:45.050, elided ...]
00:15:36.091 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
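For context, the error flooding this log is the NVMe-oF target's payload-length validation: nvmf_bdev_ctrlr_read_cmd rejects a read whenever NLB * block size exceeds the buffer length described by the command's SGL (here 1 block * 512 bytes against a 1-byte SGL), and each such read then completes back to the host with an error, which is what the suppressed "Read completed with error (sct=0, sc=15)" message reflects. Below is a minimal standalone C sketch of that bounds check; the struct and field names are illustrative stand-ins, not the actual SPDK definitions.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for the fields the read path checks;
     * these are NOT the real SPDK structures. */
    struct read_cmd {
        uint64_t num_blocks;  /* NLB from the command (already converted from 0-based) */
        uint32_t block_size;  /* namespace block size in bytes */
        uint32_t sgl_length;  /* total transfer length described by the SGL */
    };

    /* Returns 0 if the read fits the SGL-described buffer, -1 otherwise.
     * The failure branch prints the same message repeated in the log above;
     * the real target would complete the command with an error status
     * rather than just returning -1. */
    static int check_read_bounds(const struct read_cmd *cmd)
    {
        if (cmd->num_blocks * cmd->block_size > cmd->sgl_length) {
            fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                    (unsigned long long)cmd->num_blocks,
                    cmd->block_size, cmd->sgl_length);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* The exact case from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
        struct read_cmd bad = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
        return check_read_bounds(&bad) == 0 ? 0 : 1;
    }

The test is evidently submitting reads with deliberately undersized SGLs, so every submission trips this check and the target emits one error line per read until the logger begins suppressing them.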
[... the same *ERROR* line continues to repeat, timestamps 23:19:45.050 through 23:19:45.063, elided ...]
00:15:36.094 [2024-07-10 23:19:45.063790] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.063844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.063900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.063954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.064979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.065030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.065077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 
[2024-07-10 23:19:45.065127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.065180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.065232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.065284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.065330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.066980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.067964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.068013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.094 [2024-07-10 23:19:45.068061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068563] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.068970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.069978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 
[2024-07-10 23:19:45.070091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.070968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.071991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.072978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073142] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.073965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.074015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.074068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.095 [2024-07-10 23:19:45.074124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 
[2024-07-10 23:19:45.074688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.074997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.075996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.076742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.077974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.078975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 
[2024-07-10 23:19:45.079326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.096 [2024-07-10 23:19:45.079639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.079694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.079747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.079800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.079854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.079910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.079967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.080835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.081818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082549] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.082999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 [2024-07-10 23:19:45.083823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097 
[2024-07-10 23:19:45.083879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.097
(previous error repeated continuously, 23:19:45.083936 through 23:19:45.096883; duplicate lines elided) 00:15:36.392
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.392
23:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.392
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.392
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.392
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.392
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.392
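The error flooding this part of the log is the NVMe-oF target's read-command validation firing: during the namespace hot-plug stress test, reads arrive whose requested transfer length (NLB, the number of logical blocks, times the 512-byte block size) exceeds the 1-byte SGL the host supplied, so each command is rejected at ctrlr_bdev.c:309 before reaching the bdev layer and completes with an error status (the suppressed "Read completed with error" messages). A minimal standalone sketch of that kind of length check, using hypothetical names (validate_read_length, sgl_length) rather than SPDK's actual internals:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical sketch of the check behind the
 * "Read NLB x * block size y > SGL length z" error above;
 * names and return convention are illustrative, not SPDK's.
 */
static int
validate_read_length(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	/* Transfer length implied by the NVMe read command. */
	uint64_t xfer_len = nlb * (uint64_t)block_size;

	/* The host-supplied SGL must cover the whole transfer. */
	if (xfer_len > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
		        " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
		return -1; /* caller completes the command with an error status */
	}
	return 0;
}

int
main(void)
{
	/* The failing case from the log: 1 block of 512 bytes vs a 1-byte SGL. */
	return validate_read_length(1, 512, 1) ? 1 : 0;
}

With those inputs the sketch prints the same line seen above; the stress test keeps issuing such reads, which is why the identical message repeats until the logger starts suppressing it.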
[2024-07-10 23:19:45.290137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.392
(previous error repeated continuously, 23:19:45.290214 through 23:19:45.308193; duplicate lines elided) 00:15:36.395
[2024-07-10 23:19:45.308240] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.308964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 [2024-07-10 23:19:45.309705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.395 
[2024-07-10 23:19:45.309755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.309806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.309861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.309915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.309967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.310980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.311978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:36.396 [2024-07-10 23:19:45.312834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.312894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.312946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.312997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.313047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.313103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.396 [2024-07-10 23:19:45.313154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
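The flood above is one failure repeated with advancing timestamps: nvmf_bdev_ctrlr_read_cmd (lib/nvmf/ctrlr_bdev.c in the SPDK tree) rejects each read because the required payload, NLB x block size = 1 x 512 = 512 bytes, exceeds the 1 byte of buffer the command's SGL describes, so validation fails before any bdev I/O is submitted. A minimal shell rendering of that length check (the real check is C code; these variable names are illustrative, not SPDK's):

nlb=1; block_size=512; sgl_length=1
# reject the read when the requested transfer exceeds the SGL-described buffer
if [ $(( nlb * block_size )) -gt "$sgl_length" ]; then
    echo "Read NLB $nlb * block size $block_size > SGL length $sgl_length" >&2
fi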
00:15:36.398 23:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:36.398 23:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
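Interleaved with the errors, the xtrace lines above show the stress script advancing its counter (null_size=1016, step @49) and growing the NULL1 null bdev over JSON-RPC (step @50). A minimal sketch of that resize step, assuming a simple bounded loop (the start value and iteration count are illustrative, not the script's actual values):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000                                     # assumed start; the trace shows it reaching 1016
for _ in $(seq 1 16); do
    null_size=$(( null_size + 1 ))                 # @49: bump the target size
    "$rpc_py" bdev_null_resize NULL1 "$null_size"  # @50: resize NULL1, driving namespace hotplug events
done

Each resize changes the namespace size reported by the subsystem, which is the hotplug churn the ns_hotplug_stress test exercises while the undersized reads keep failing in the background.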
00:15:36.398 [2024-07-10 23:19:45.325413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334092] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.334951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 
[2024-07-10 23:19:45.335496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.335997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.336732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.337998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338696] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.400 [2024-07-10 23:19:45.338756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.338807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.338859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.338910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.338966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.339951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 
[2024-07-10 23:19:45.340047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.340599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.341988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.342957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343604] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.343966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.344821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.345023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 
[2024-07-10 23:19:45.345075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.345127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.345188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.345261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.401 [2024-07-10 23:19:45.345319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.345909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.346983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.347989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348239] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.348980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 
[2024-07-10 23:19:45.349592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.349962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.402 [2024-07-10 23:19:45.350929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.350981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.351977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352366] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.352950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.353004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.353056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.353114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.353973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 [2024-07-10 23:19:45.354482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 
[2024-07-10 23:19:45.354532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.403 Message suppressed 999 times: [2024-07-10 23:19:45.373776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.406 Read completed with error (sct=0, sc=15) 00:15:36.406
size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.386851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.408 [2024-07-10 23:19:45.387677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.387717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.387766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.387813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.387864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.387917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.387968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388165] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.388968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 
[2024-07-10 23:19:45.389526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.389964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.390979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.391982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392355] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.392991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.393045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.393094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.393135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.393194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.409 [2024-07-10 23:19:45.393240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 
[2024-07-10 23:19:45.393681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.393970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.394015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.394873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.394938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.394992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.395971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.396998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397137] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.397988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 
[2024-07-10 23:19:45.398608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.398981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.399985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.400044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.400098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.400148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.410 [2024-07-10 23:19:45.400208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.400972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401651] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.401978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.402911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 
[2024-07-10 23:19:45.403178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.403945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.411 [2024-07-10 23:19:45.404969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.412 [2024-07-10 23:19:45.405685] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:36.412 [2024-07-10 23:19:45.405736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:36.412 [2024-07-10 23:19:45.405782 through 23:19:45.438228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry repeated several hundred times; duplicates elided)
00:15:36.416 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:15:36.417 [2024-07-10 23:19:45.438273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417
[2024-07-10 23:19:45.438319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.438953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.439979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.440998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441143] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.417 [2024-07-10 23:19:45.441637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.441797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.441853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.441903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.441962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 
[2024-07-10 23:19:45.442570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.442957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.443586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.444964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.445955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446009] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.446982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.447030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.447082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.686 [2024-07-10 23:19:45.447136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 
[2024-07-10 23:19:45.447406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.447953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.448955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.449950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450226] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.450999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.451050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.451099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.451995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 
[2024-07-10 23:19:45.452387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.452983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.687 [2024-07-10 23:19:45.453642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.453698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.453760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.453812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.453867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.453920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.453973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.454965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455118] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.455944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 
[2024-07-10 23:19:45.456715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.456987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.457999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.688 [2024-07-10 23:19:45.458052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:15:36.688 [2024-07-10 23:19:45.458099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:36.688 [... the same *ERROR* line repeated several hundred times, timestamps 2024-07-10 23:19:45.458147 through 23:19:45.481845 ...]
00:15:36.692 true
00:15:36.692 [... the same *ERROR* line repeated well over a hundred more times, timestamps 2024-07-10 23:19:45.481896 through 23:19:45.490773; the final occurrence is cut off mid-message in the source log ...]
size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.490829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.490886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.490932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.490981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.491725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:36.694 [2024-07-10 23:19:45.492646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.492710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.492763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.492817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.492867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.694 [2024-07-10 23:19:45.492921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
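The flood above is one condition tripped in a tight loop: every read arrives with NLB = 1 at a 512-byte block size, but the SGL carried by the request describes only 1 byte of buffer, so nvmf_bdev_ctrlr_read_cmd rejects the command before it reaches the bdev; the matching failed completions are rate-limited, which is what the "Message suppressed 999 times: Read completed with error (sct=0, sc=15)" line records. Below is a minimal sketch of that kind of length check; the function and variable names are illustrative assumptions, not the exact ctrlr_bdev.c source.

    /* Sketch only: illustrative names, not the exact SPDK ctrlr_bdev.c code. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fail a read whose data length (NLB * block size) exceeds the buffer
     * length described by the request's SGL, mirroring the log message. */
    static int
    read_cmd_length_check(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
    {
        if (num_blocks * block_size > sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    num_blocks, block_size, sgl_length);
            return -1; /* the command is completed with an error status instead */
        }
        return 0;
    }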
[... same *ERROR* line repeated for timestamps 23:19:45.492646 through 23:19:45.506386 ...]
00:15:36.696 [2024-07-10 23:19:45.506435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:36.696 23:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
[... same *ERROR* line repeated for timestamps 23:19:45.506489 through 23:19:45.506767 ...]
00:15:36.696 23:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... same *ERROR* line repeated for timestamps 23:19:45.506825 through 23:19:45.507819 ...]
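Between the error bursts the harness itself is visible: the @44 trace line probes that the target process (PID 2349135) is still alive with kill -0, which delivers no signal and only performs the existence and permission checks, and the @45 line hot-removes namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 over the RPC socket while reads are still in flight; that hot-remove is the event this stress test exists to exercise. The same liveness probe as it looks from C, under standard POSIX kill(2) semantics (a sketch, not part of the test code):

    /* Sketch only: the `kill -0 <pid>` idiom in C; not taken from the SPDK tests. */
    #include <errno.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>

    static bool
    process_alive(pid_t pid)
    {
        if (kill(pid, 0) == 0) {
            return true;        /* process exists and we may signal it */
        }
        /* EPERM still proves existence; ESRCH means no such process. */
        return errno == EPERM;
    }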
[... same *ERROR* line repeated for timestamps 23:19:45.507872 through 23:19:45.517193 ...]
00:15:36.698 [2024-07-10 23:19:45.517242] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.517994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 
[2024-07-10 23:19:45.518590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.518967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.519825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.698 [2024-07-10 23:19:45.520645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.520997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521403] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.521999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 
[2024-07-10 23:19:45.522719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.522986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.523043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.523098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.523152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.523223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.524985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.525965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526268] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.699 [2024-07-10 23:19:45.526953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 
[2024-07-10 23:19:45.527835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.527980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.528986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.529949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.530953] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:36.700 [2024-07-10 23:19:45.531396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:37.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.637 23:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.896 23:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:37.896 23:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:37.896 true 00:15:38.155 23:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:38.155 23:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.723 23:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.982 23:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:38.982 23:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:39.241 true 00:15:39.241 23:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:39.241 23:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.500 23:19:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.500 23:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:39.500 23:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:39.759 true 00:15:39.759 23:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:39.759 23:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.140 23:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.140 23:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:41.140 23:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:41.140 true 00:15:41.399 23:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:41.399 23:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.967 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.226 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:42.226 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:42.485 true 00:15:42.485 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:42.485 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.744 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.744 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:42.744 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:43.003 true 
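Every *ERROR* record in the run above trips the same guard in ctrlr_bdev.c: a read of NLB blocks needs NLB * block size bytes of buffer, but the request's SGL describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd fails the command before it reaches the bdev layer. A minimal shell illustration of the arithmetic the message prints (variable names are ours; this is a sketch of the logged condition, not the SPDK source):

    nlb=1; block_size=512; sgl_length=1           # values taken from the log records
    if (( nlb * block_size > sgl_length )); then  # the comparison the *ERROR* line reports
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi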
00:15:43.003 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:43.003 23:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.382 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.382 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:44.382 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:44.382 true 00:15:44.643 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:44.643 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.643 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.902 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:44.902 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:45.160 true 00:15:45.161 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:45.161 23:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.161 23:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.419 23:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:45.419 23:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:45.678 true 00:15:45.678 23:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:45.678 23:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.614 23:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.614 23:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:46.614 23:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:46.873 true 00:15:46.873 23:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:46.873 23:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.132 23:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.132 23:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:47.132 23:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:47.391 true 00:15:47.391 23:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:47.391 23:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.769 23:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.770 23:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:48.770 23:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:49.029 true 00:15:49.029 23:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:49.029 23:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.966 23:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.966 23:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:49.966 23:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:50.226 true 00:15:50.226 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:50.226 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.226 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.485 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:50.485 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:50.744 true 00:15:50.744 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:50.744 23:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.940 23:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.940 23:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:15:51.940 23:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:15:52.198 true 00:15:52.198 23:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:52.198 23:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.134 23:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.134 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:15:53.134 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:15:53.395 true 00:15:53.395 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135 00:15:53.395 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.693 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.693 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:15:53.693 23:20:02 
00:15:53.693 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:15:53.951 true
00:15:53.951 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:53.951 23:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:54.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.214 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:54.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.214 [2024-07-10 23:20:03.256844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeats several hundred times between 23:20:03.256844 and 23:20:03.286545 (elapsed 00:15:54.214-00:15:54.516); duplicate lines omitted, with the following records interleaved in the flood ...]
00:15:54.215 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:15:54.516 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:15:54.516 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
[... ctrlr_bdev.c:309 error flood continues past this point; duplicates omitted ...]
NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.286978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.287639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288733] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.288994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.289969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 
[2024-07-10 23:20:03.290117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.516 [2024-07-10 23:20:03.290423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.290961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.291959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.292751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293275] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.293980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 
[2024-07-10 23:20:03.294568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.294988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.295963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.517 [2024-07-10 23:20:03.296340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.296956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297309] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.297968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 
[2024-07-10 23:20:03.298587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.298965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.299770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.300997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.301953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.518 [2024-07-10 23:20:03.302595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.302995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 
[2024-07-10 23:20:03.303372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.303894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.304958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.305977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.306027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.306080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.519 [2024-07-10 23:20:03.306138] ctrlr_bdev.c: 
00:15:54.519 [2024-07-10 23:20:03.306200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[several hundred identical *ERROR* entries from ctrlr_bdev.c:309, differing only in the microsecond timestamp (23:20:03.306200 to 23:20:03.338369), collapsed]
00:15:54.521 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:15:54.525 [2024-07-10 23:20:03.338369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-10 23:20:03.338420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.338807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.339974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.340986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.341032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.341082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.341135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.341199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.341248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.525 [2024-07-10 23:20:03.341294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341822] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.341979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.342872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 
[2024-07-10 23:20:03.343295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.343913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.344962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.345993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346314] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.346951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.347003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.347061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.347112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.347172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.347229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.526 [2024-07-10 23:20:03.347290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 
[2024-07-10 23:20:03.347859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.347998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.348958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.349985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.350959] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.351960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 
[2024-07-10 23:20:03.352261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.352983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.353036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.353094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.353150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.353205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.353259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.527 [2024-07-10 23:20:03.353311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.353961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.354966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.355019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.355759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.355813] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.355864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.355911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.355957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.356996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 
[2024-07-10 23:20:03.357089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.357999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.528 [2024-07-10 23:20:03.358465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[duplicate log output collapsed: the error line above repeats several hundred more times, identical except for the timestamp, covering 2024-07-10 23:20:03.358514 through 23:20:03.391022 (console timestamps 00:15:54.528 through 00:15:54.534)]
00:15:54.532 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.391990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 
[2024-07-10 23:20:03.392336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.392981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.393974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.394980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395069] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.534 [2024-07-10 23:20:03.395744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.395789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.395843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.395891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.395935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.395988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 
[2024-07-10 23:20:03.396427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.396971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.397548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.398963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.399989] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.400993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 
[2024-07-10 23:20:03.401361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.401963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.402013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.402064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.402111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.402165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.402218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.535 [2024-07-10 23:20:03.402264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.402806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.403959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404681] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.404990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.405940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 
[2024-07-10 23:20:03.405994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.406986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.407951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.408002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.536 [2024-07-10 23:20:03.408056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408812] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.408955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.409983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.410036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 [2024-07-10 23:20:03.410882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.537 
[2024-07-10 23:20:03.410947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated several hundred times, timestamps advancing from 23:20:03.411002 to 23:20:03.438891 ...]
00:15:54.542 Message suppressed 999 times: [2024-07-10 23:20:03.438939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:54.542 Read completed with error (sct=0, sc=15)
[... identical *ERROR* line repeated, timestamps advancing from 23:20:03.438988 to 23:20:03.441067, final occurrence truncated mid-message ...]
length 1 00:15:54.542 [2024-07-10 23:20:03.441121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.441728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.442962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.443956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444568] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.542 [2024-07-10 23:20:03.444939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.444985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.445972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 
[2024-07-10 23:20:03.446030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.446852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.447988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.448967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449213] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.449960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 
[2024-07-10 23:20:03.450492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.450950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.451001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.451056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.451113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.543 [2024-07-10 23:20:03.451171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.451992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.452964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453220] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.453972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.454946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 
[2024-07-10 23:20:03.455561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.455963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.456952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.544 [2024-07-10 23:20:03.457500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.457997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458107] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.458991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.459982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 
[2024-07-10 23:20:03.460079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 true 00:15:54.545 [2024-07-10 23:20:03.460706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.460983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.461992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.545 [2024-07-10 23:20:03.462713] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:54.545 [2024-07-10 23:20:03.462766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:54.545 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated continuously from 2024-07-10 23:20:03.462818 through 23:20:03.485418 ...]
00:15:54.549 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:54.549 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:54.549 [2024-07-10 23:20:03.486321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:54.550 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated continuously from 2024-07-10 23:20:03.486386 through 23:20:03.494895 ...] 00:15:54.551
[2024-07-10 23:20:03.494944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.494999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.495977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.496994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497583] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.497911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:54.551 [2024-07-10 23:20:03.498772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.498837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.498888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.498941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.498993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.551 [2024-07-10 23:20:03.499703] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.499744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.499795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.499841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.499889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.499941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.499989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.500941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 
[2024-07-10 23:20:03.500997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.501974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.502971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.503947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504252] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.504953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.552 [2024-07-10 23:20:03.505568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 
[2024-07-10 23:20:03.505610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.505999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.506960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.507983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508390] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.508989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.509039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.509091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.509144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.509205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.509255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.509308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 
[2024-07-10 23:20:03.510503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.510997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.511056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.511111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.511173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.553 [2024-07-10 23:20:03.511227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.511980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.512976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513168] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.513974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.514987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.515047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [2024-07-10 23:20:03.515104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 
[2024-07-10 23:20:03.515158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.554 [previous error message repeated several hundred times, timestamps 23:20:03.515158 through 23:20:03.546403; duplicates omitted] 00:15:54.825 
[2024-07-10 23:20:03.546451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.546925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.546974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.547988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.548994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549488] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.549968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.550966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 
[2024-07-10 23:20:03.551016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.825 [2024-07-10 23:20:03.551629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.551974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.552665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.553978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554378] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.554956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 
[2024-07-10 23:20:03.555634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.555947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.556947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:54.826 [2024-07-10 23:20:03.557501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.557961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:15:54.826 [2024-07-10 23:20:03.558446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.558582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.559997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.560955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.826 [2024-07-10 23:20:03.561473] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.561971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 
[2024-07-10 23:20:03.562913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.562966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.563942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.564703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.564759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.564808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.564857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.564909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.564954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.565957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.566007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.566053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.566099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.566148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.566199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.827 [2024-07-10 23:20:03.566253] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:54.827 [2024-07-10 23:20:03.566308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* lines from 23:20:03.566359 through 23:20:03.598270 elided]
00:15:54.830 [2024-07-10 23:20:03.598319] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.598969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.830 [2024-07-10 23:20:03.599534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 
[2024-07-10 23:20:03.599676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.599984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.600954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.601985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602942] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.602992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.603975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.604836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.604895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.604948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 
[2024-07-10 23:20:03.604999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.605973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.606993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607572] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.607959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.608968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 
[2024-07-10 23:20:03.609468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.609989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.610044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.831 [2024-07-10 23:20:03.610098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.610960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.611958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612105] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.612974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 
[2024-07-10 23:20:03.613614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.613994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.614960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.615972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616755] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.616950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:15:54.832 [2024-07-10 23:20:03.617294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.617976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:15:54.832 [2024-07-10 23:20:03.618029] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:15:54.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.835 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:54.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:15:54.835 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:15:54.835 23:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
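The @44 through @53 records above trace the script's resize loop: while the background I/O generator (PID 2349135 here) stays alive, it re-attaches the Delay0 namespace and grows null bdev NULL1 one unit per pass, racing namespace changes against in-flight reads; that race is what produces the suppressed "Read completed with error" messages. A hedged bash reconstruction follows; the loop structure, $perf_pid, and the starting null_size are assumptions, while the rpc.py invocations, the 1035 value, and the script line numbers come straight from the trace:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while kill -0 "$perf_pid" 2>/dev/null; do                               # @44: loop while the I/O generator runs ($perf_pid is assumed)
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach the delay bdev as a namespace
      null_size=$((null_size + 1))                                        # @49: has reached 1035 at this point in the log
      "$rpc_py" bdev_null_resize NULL1 "$null_size"                       # @50: resize NULL1 under active I/O
  done
  wait "$perf_pid"                                                        # @53: reap the generator once it exits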
00:15:54.835 Initializing NVMe Controllers
00:15:54.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:54.835 Controller IO queue size 128, less than required.
00:15:54.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:54.835 Controller IO queue size 128, less than required.
00:15:54.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:54.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:54.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:15:54.835 Initialization complete. Launching workers.
00:15:54.835 ========================================================
00:15:54.835 Latency(us)
00:15:54.835 Device Information : IOPS MiB/s Average min max
00:15:54.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2667.57 1.30 28683.85 1946.96 1013796.75
00:15:54.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13001.10 6.35 9819.00 2648.72 490100.20
00:15:54.835 ========================================================
00:15:54.835 Total : 15668.67 7.65 13030.72 1946.96 1013796.75
00:15:54.835
00:15:55.093 true
00:15:55.093 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349135
00:15:55.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2349135) - No such process
00:15:55.093 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2349135
00:15:55.093 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:55.352 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:55.352 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:15:55.352 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:15:55.352 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:15:55.352 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:55.352 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:15:55.612 null0
00:15:55.612 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:55.612 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:55.612 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:15:55.871 null1
00:15:55.871 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:55.871 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:55.871 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:15:55.871 null2
00:15:55.871 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:55.871 23:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
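The @58 through @60 records are the setup for the hotplug workers: eight null bdevs, one per worker, each created with the arguments 100 and 4096 (size in MiB and block size, as bdev_null_create takes them; the values come straight from the trace). A hedged sketch of that loop, with rpc_py as in the sketch above; the trace continues below for null3 through null7:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      # @60: one null bdev per worker thread, 100 MiB with a 4096-byte block size
      "$rpc_py" bdev_null_create "null$i" 100 4096
  done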
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:56.129 null3 00:15:56.129 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.129 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.129 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:56.388 null4 00:15:56.388 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.388 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.388 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:56.388 null5 00:15:56.646 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.646 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.646 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:56.646 null6 00:15:56.646 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.646 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.646 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:56.905 null7 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.905 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
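The interleaved xtrace above comes from eight concurrent hotplug workers, so consecutive lines belong to different subshells. Untangled, the add_remove helper that each worker runs reduces to roughly the following sketch, reconstructed from the @14, @16, @17 and @18 trace lines (this is not the verbatim ns_hotplug_stress.sh; $rootdir is assumed to point at the SPDK checkout, as in the absolute rpc.py paths above):

    add_remove() {
        local nsid=$1 bdev=$2
        # Hot-add, then hot-remove, the same namespace ten times in a row,
        # matching the (( i = 0 )) / (( i < 10 )) / (( ++i )) checks at @16.
        for ((i = 0; i < 10; i++)); do
            "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }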
00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
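Each add_remove invocation at @63 is launched as a background job and its PID appended to the pids array at @64, which is what produces the interleaving. The launch sequence visible at @58-@64 (together with the wait at @66 below) amounts to something like this sketch; the 100 MiB size and 4096-byte block size are the literal bdev_null_create arguments from the trace:

    nthreads=8
    pids=()
    # One null backing bdev per worker: null0 .. null7, 100 MiB, 4096-byte blocks.
    for ((i = 0; i < nthreads; i++)); do
        "$rootdir/scripts/rpc.py" bdev_null_create "null$i" 100 4096
    done
    # One background hotplug worker per namespace ID 1..8.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # traced at @66 as: wait 2354640 2354642 ... 2354653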
00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
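A related idiom appeared earlier in this test, at ns_hotplug_stress.sh line 44, where the log printed "kill: (2349135) - No such process": kill -0 sends no signal at all, it only checks whether the PID still exists, and the script follows it with a wait to reap the background perf job's exit status. A minimal sketch of that liveness probe (perf_pid is a hypothetical name used here for illustration):

    if ! kill -0 "$perf_pid" 2>/dev/null; then
        wait "$perf_pid"   # process already exited; wait reaps it and returns its status
    fi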
00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2354640 2354642 2354644 2354645 2354647 2354649 2354651 2354653 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.906 23:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.164 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.165 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.423 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.683 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.941 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.941 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.941 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.941 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.942 23:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.200 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.458 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.459 
23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.459 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.718 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.977 23:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.236 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.237 
23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.237 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.496 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:59.756 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:00.016 23:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:00.016 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.016 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.016 
23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.016 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:00.016 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.016 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:00.016 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:00.275 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.275 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.275 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:00.275 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.275 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.275 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.276 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:16:00.535 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.795 rmmod nvme_tcp 00:16:00.795 rmmod nvme_fabrics 00:16:00.795 rmmod nvme_keyring 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2348769 ']' 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2348769 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2348769 ']' 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2348769 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348769 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348769' 00:16:00.795 killing process with pid 2348769 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2348769 00:16:00.795 23:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2348769 00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:02.169 23:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:04.708 23:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:04.709
00:16:04.709 real 0m47.771s
00:16:04.709 user 3m14.411s
00:16:04.709 sys 0m14.959s
00:16:04.709 23:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:04.709 23:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.709 ************************************
00:16:04.709 END TEST nvmf_ns_hotplug_stress
00:16:04.709 ************************************
00:16:04.709 23:20:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:04.709 23:20:13 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:04.709 23:20:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:04.709 23:20:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:04.709 23:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:04.709 ************************************
00:16:04.709 START TEST nvmf_connect_stress
00:16:04.709 ************************************
00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:04.709 * Looking for test storage...
00:16:04.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.709 23:20:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.989 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:09.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:09.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:09.990 Found net devices under 0000:86:00.0: cvl_0_0 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.990 23:20:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:09.990 Found net devices under 0000:86:00.1: cvl_0_1 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:09.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:16:09.990 00:16:09.990 --- 10.0.0.2 ping statistics --- 00:16:09.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.990 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:16:09.990 00:16:09.990 --- 10.0.0.1 ping statistics --- 00:16:09.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.990 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2359019 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2359019 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2359019 ']' 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.990 23:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.990 [2024-07-10 23:20:18.824739] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:16:09.990 [2024-07-10 23:20:18.824839] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.990 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.990 [2024-07-10 23:20:18.930934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.249 [2024-07-10 23:20:19.136151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.249 [2024-07-10 23:20:19.136209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.249 [2024-07-10 23:20:19.136224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.249 [2024-07-10 23:20:19.136234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.249 [2024-07-10 23:20:19.136245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.249 [2024-07-10 23:20:19.136323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.250 [2024-07-10 23:20:19.136382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.250 [2024-07-10 23:20:19.136393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.819 [2024-07-10 23:20:19.647097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.819 [2024-07-10 23:20:19.677275] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.819 NULL1 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2359264 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.819 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.820 23:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.079 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.079 23:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:11.079 23:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.079 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.079 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.647 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.647 23:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:11.647 23:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.647 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.647 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.906 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.906 23:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 
00:16:11.906 23:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.906 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.906 23:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.165 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.165 23:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:12.165 23:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.165 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.165 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.424 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.424 23:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:12.424 23:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.424 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.424 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.683 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.683 23:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:12.683 23:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.683 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.683 23:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.253 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.253 23:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:13.253 23:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.253 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.253 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.512 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.512 23:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:13.512 23:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.512 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.512 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.769 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.769 23:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:13.769 23:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.769 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.769 23:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.027 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.027 23:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:14.027 23:20:23 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.028 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.028 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.681 23:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.247 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.247 23:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:15.248 23:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.248 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.248 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.505 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.505 23:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:15.505 23:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.505 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.505 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.762 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.762 23:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:15.762 23:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.762 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.762 23:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.019 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.019 23:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:16.019 23:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.019 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.019 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.585 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.585 23:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:16.585 23:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.585 
23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.585 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.843 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.843 23:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:16.843 23:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.843 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.843 23:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.102 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.102 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:17.102 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.102 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.102 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.361 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.361 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:17.361 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.361 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.361 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.620 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.620 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:17.620 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.620 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.620 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.189 23:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.189 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:18.189 23:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.189 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.189 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.447 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.447 23:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:18.447 23:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.447 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.447 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.706 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.706 23:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:18.706 23:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.706 23:20:27 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.706 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.964 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.964 23:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:18.964 23:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.964 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.965 23:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.532 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.532 23:20:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:19.532 23:20:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.532 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.532 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.791 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.791 23:20:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:19.791 23:20:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.791 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.791 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.050 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.050 23:20:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:20.050 23:20:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.050 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.050 23:20:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.308 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.308 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:20.308 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.308 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.308 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.566 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.566 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:20.566 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.566 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.566 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.134 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2359264 00:16:21.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: 
(2359264) - No such process 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2359264 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.134 23:20:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.134 rmmod nvme_tcp 00:16:21.134 rmmod nvme_fabrics 00:16:21.134 rmmod nvme_keyring 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2359019 ']' 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2359019 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2359019 ']' 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2359019 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359019 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359019' 00:16:21.134 killing process with pid 2359019 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2359019 00:16:21.134 23:20:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2359019 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.514 23:20:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.514 23:20:31 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.420 23:20:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.678 00:16:24.678 real 0m20.217s 00:16:24.678 user 0m43.853s 00:16:24.678 sys 0m7.726s 00:16:24.678 23:20:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.678 23:20:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.678 ************************************ 00:16:24.678 END TEST nvmf_connect_stress 00:16:24.678 ************************************ 00:16:24.678 23:20:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.678 23:20:33 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:24.678 23:20:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.678 23:20:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.678 23:20:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.678 ************************************ 00:16:24.678 START TEST nvmf_fused_ordering 00:16:24.678 ************************************ 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:24.678 * Looking for test storage... 00:16:24.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
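The nvmf/common.sh preamble traced above pins the test identity: TCP ports 4420-4422, serial SPDKISFASTANDAWESOME, and a host NQN minted on the fly with nvme-cli. For readers reproducing this outside the harness, the same identity pair can be derived as follows (a minimal bash sketch; the parameter expansion is one way to split out the UUID, not necessarily how common.sh itself does it):

  # Mint an initiator NQN and derive the bare host ID, mirroring
  # NVME_HOSTNQN / NVME_HOSTID in the trace above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")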
00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.678 23:20:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:29.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:29.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:29.957 Found net devices under 0000:86:00.0: cvl_0_0 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:29.957 Found net devices under 0000:86:00.1: cvl_0_1 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:29.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:16:29.957 00:16:29.957 --- 10.0.0.2 ping statistics --- 00:16:29.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.957 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:16:29.957 00:16:29.957 --- 10.0.0.1 ping statistics --- 00:16:29.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.957 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:16:29.957 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2364413 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@482 -- # waitforlisten 2364413 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2364413 ']' 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.958 23:20:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.958 [2024-07-10 23:20:38.594482] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:29.958 [2024-07-10 23:20:38.594565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.958 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.958 [2024-07-10 23:20:38.703292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.958 [2024-07-10 23:20:38.919681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.958 [2024-07-10 23:20:38.919727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.958 [2024-07-10 23:20:38.919739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.958 [2024-07-10 23:20:38.919749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.958 [2024-07-10 23:20:38.919759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
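Both E810 ports (0000:86:00.0 and 0000:86:00.1, device 0x159b, driver ice) were discovered above, so nvmf_tcp_init builds the standard single-host loopback topology: the target-side port moves into a private network namespace while the initiator-side port stays in the root namespace, and the two talk over the physical link. Condensed from the trace (a sketch, assuming the two ports are cabled back-to-back as on this CI node):

  ip netns add cvl_0_0_ns_spdk                      # private ns for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                # sanity check: root ns -> target ns

nvmf_tgt itself is then launched inside the namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array, i.e. ip netns exec cvl_0_0_ns_spdk), which is why nvmfappstart above runs the binary with -i 0 -e 0xFFFF -m 0x2 under that prefix and waitforlisten polls /var/tmp/spdk.sock for pid 2364413.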
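Once the target is listening, the rpc_cmd calls that follow provision it. rpc_cmd is effectively the harness's wrapper around SPDK's scripts/rpc.py, so the same sequence could be issued by hand against the RPC socket (a sketch; arguments copied verbatim from the trace below, socket path from the waitforlisten line above):

  rpc=scripts/rpc.py; sock=/var/tmp/spdk.sock
  $rpc -s $sock nvmf_create_transport -t tcp -o -u 8192          # TCP transport, -u sets an 8192-byte IO unit
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                            # allow any host, max 10 namespaces
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
  $rpc -s $sock bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512 B blocks
  $rpc -s $sock bdev_wait_for_examine
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app then connects as an initiator (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'), sees the 1 GB namespace, and prints one fused_ordering(i) line per completed iteration, 1024 in all (0 through 1023).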
00:16:29.958 [2024-07-10 23:20:38.919785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.526 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.527 [2024-07-10 23:20:39.404018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.527 [2024-07-10 23:20:39.420198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.527 NULL1 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.527 23:20:39 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.527 23:20:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:30.527 [2024-07-10 23:20:39.492790] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:30.527 [2024-07-10 23:20:39.492848] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364658 ] 00:16:30.527 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.095 Attached to nqn.2016-06.io.spdk:cnode1 00:16:31.095 Namespace ID: 1 size: 1GB 00:16:31.095 fused_ordering(0) 00:16:31.095 fused_ordering(1) 00:16:31.095 fused_ordering(2) 00:16:31.095 fused_ordering(3) 00:16:31.095 fused_ordering(4) 00:16:31.095 fused_ordering(5) 00:16:31.095 fused_ordering(6) 00:16:31.095 fused_ordering(7) 00:16:31.095 fused_ordering(8) 00:16:31.095 fused_ordering(9) 00:16:31.095 fused_ordering(10) 00:16:31.095 fused_ordering(11) 00:16:31.095 fused_ordering(12) 00:16:31.095 fused_ordering(13) 00:16:31.095 fused_ordering(14) 00:16:31.095 fused_ordering(15) 00:16:31.095 fused_ordering(16) 00:16:31.095 fused_ordering(17) 00:16:31.095 fused_ordering(18) 00:16:31.095 fused_ordering(19) 00:16:31.095 fused_ordering(20) 00:16:31.095 fused_ordering(21) 00:16:31.095 fused_ordering(22) 00:16:31.095 fused_ordering(23) 00:16:31.095 fused_ordering(24) 00:16:31.095 fused_ordering(25) 00:16:31.095 fused_ordering(26) 00:16:31.095 fused_ordering(27) 00:16:31.095 fused_ordering(28) 00:16:31.095 fused_ordering(29) 00:16:31.095 fused_ordering(30) 00:16:31.095 fused_ordering(31) 00:16:31.095 fused_ordering(32) 00:16:31.095 fused_ordering(33) 00:16:31.095 fused_ordering(34) 00:16:31.095 fused_ordering(35) 00:16:31.095 fused_ordering(36) 00:16:31.095 fused_ordering(37) 00:16:31.095 fused_ordering(38) 00:16:31.095 fused_ordering(39) 00:16:31.095 fused_ordering(40) 00:16:31.095 fused_ordering(41) 00:16:31.095 fused_ordering(42) 00:16:31.095 fused_ordering(43) 00:16:31.095 fused_ordering(44) 00:16:31.095 fused_ordering(45) 00:16:31.095 fused_ordering(46) 00:16:31.095 fused_ordering(47) 00:16:31.095 fused_ordering(48) 00:16:31.095 fused_ordering(49) 00:16:31.095 fused_ordering(50) 00:16:31.095 fused_ordering(51) 00:16:31.095 fused_ordering(52) 00:16:31.095 fused_ordering(53) 00:16:31.095 fused_ordering(54) 00:16:31.095 fused_ordering(55) 00:16:31.095 fused_ordering(56) 00:16:31.095 fused_ordering(57) 00:16:31.095 fused_ordering(58) 00:16:31.095 fused_ordering(59) 00:16:31.095 fused_ordering(60) 00:16:31.095 fused_ordering(61) 00:16:31.095 fused_ordering(62) 00:16:31.095 fused_ordering(63) 00:16:31.095 fused_ordering(64) 00:16:31.095 fused_ordering(65) 00:16:31.095 fused_ordering(66) 00:16:31.095 fused_ordering(67) 00:16:31.095 fused_ordering(68) 00:16:31.095 fused_ordering(69) 00:16:31.095 fused_ordering(70) 00:16:31.095 fused_ordering(71) 00:16:31.095 fused_ordering(72) 00:16:31.095 fused_ordering(73) 00:16:31.095 fused_ordering(74) 00:16:31.095 fused_ordering(75) 00:16:31.095 fused_ordering(76) 00:16:31.095 fused_ordering(77) 00:16:31.095 fused_ordering(78) 00:16:31.095 
fused_ordering(79) 00:16:31.095 fused_ordering(80) 00:16:31.095 fused_ordering(81) 00:16:31.095 fused_ordering(82) 00:16:31.095 fused_ordering(83) 00:16:31.095 fused_ordering(84) 00:16:31.095 fused_ordering(85) 00:16:31.095 fused_ordering(86) 00:16:31.095 fused_ordering(87) 00:16:31.095 fused_ordering(88) 00:16:31.095 fused_ordering(89) 00:16:31.095 fused_ordering(90) 00:16:31.095 fused_ordering(91) 00:16:31.095 fused_ordering(92) 00:16:31.095 fused_ordering(93) 00:16:31.095 fused_ordering(94) 00:16:31.095 fused_ordering(95) 00:16:31.095 fused_ordering(96) 00:16:31.095 fused_ordering(97) 00:16:31.095 fused_ordering(98) 00:16:31.095 fused_ordering(99) 00:16:31.095 fused_ordering(100) 00:16:31.095 fused_ordering(101) 00:16:31.095 fused_ordering(102) 00:16:31.095 fused_ordering(103) 00:16:31.095 fused_ordering(104) 00:16:31.095 fused_ordering(105) 00:16:31.095 fused_ordering(106) 00:16:31.095 fused_ordering(107) 00:16:31.095 fused_ordering(108) 00:16:31.095 fused_ordering(109) 00:16:31.095 fused_ordering(110) 00:16:31.095 fused_ordering(111) 00:16:31.095 fused_ordering(112) 00:16:31.095 fused_ordering(113) 00:16:31.095 fused_ordering(114) 00:16:31.095 fused_ordering(115) 00:16:31.095 fused_ordering(116) 00:16:31.095 fused_ordering(117) 00:16:31.095 fused_ordering(118) 00:16:31.095 fused_ordering(119) 00:16:31.095 fused_ordering(120) 00:16:31.095 fused_ordering(121) 00:16:31.095 fused_ordering(122) 00:16:31.095 fused_ordering(123) 00:16:31.095 fused_ordering(124) 00:16:31.095 fused_ordering(125) 00:16:31.095 fused_ordering(126) 00:16:31.095 fused_ordering(127) 00:16:31.095 fused_ordering(128) 00:16:31.095 fused_ordering(129) 00:16:31.095 fused_ordering(130) 00:16:31.095 fused_ordering(131) 00:16:31.095 fused_ordering(132) 00:16:31.095 fused_ordering(133) 00:16:31.095 fused_ordering(134) 00:16:31.095 fused_ordering(135) 00:16:31.095 fused_ordering(136) 00:16:31.095 fused_ordering(137) 00:16:31.095 fused_ordering(138) 00:16:31.095 fused_ordering(139) 00:16:31.095 fused_ordering(140) 00:16:31.095 fused_ordering(141) 00:16:31.095 fused_ordering(142) 00:16:31.095 fused_ordering(143) 00:16:31.095 fused_ordering(144) 00:16:31.095 fused_ordering(145) 00:16:31.095 fused_ordering(146) 00:16:31.095 fused_ordering(147) 00:16:31.095 fused_ordering(148) 00:16:31.095 fused_ordering(149) 00:16:31.095 fused_ordering(150) 00:16:31.095 fused_ordering(151) 00:16:31.095 fused_ordering(152) 00:16:31.095 fused_ordering(153) 00:16:31.095 fused_ordering(154) 00:16:31.095 fused_ordering(155) 00:16:31.095 fused_ordering(156) 00:16:31.095 fused_ordering(157) 00:16:31.095 fused_ordering(158) 00:16:31.095 fused_ordering(159) 00:16:31.095 fused_ordering(160) 00:16:31.095 fused_ordering(161) 00:16:31.095 fused_ordering(162) 00:16:31.095 fused_ordering(163) 00:16:31.095 fused_ordering(164) 00:16:31.095 fused_ordering(165) 00:16:31.095 fused_ordering(166) 00:16:31.095 fused_ordering(167) 00:16:31.095 fused_ordering(168) 00:16:31.095 fused_ordering(169) 00:16:31.095 fused_ordering(170) 00:16:31.095 fused_ordering(171) 00:16:31.095 fused_ordering(172) 00:16:31.095 fused_ordering(173) 00:16:31.095 fused_ordering(174) 00:16:31.095 fused_ordering(175) 00:16:31.095 fused_ordering(176) 00:16:31.095 fused_ordering(177) 00:16:31.095 fused_ordering(178) 00:16:31.095 fused_ordering(179) 00:16:31.095 fused_ordering(180) 00:16:31.095 fused_ordering(181) 00:16:31.095 fused_ordering(182) 00:16:31.095 fused_ordering(183) 00:16:31.095 fused_ordering(184) 00:16:31.095 fused_ordering(185) 00:16:31.095 fused_ordering(186) 00:16:31.095 
fused_ordering(187) 00:16:31.095 fused_ordering(188) 00:16:31.095 fused_ordering(189) 00:16:31.095 fused_ordering(190) 00:16:31.095 fused_ordering(191) 00:16:31.095 fused_ordering(192) 00:16:31.095 fused_ordering(193) 00:16:31.095 fused_ordering(194) 00:16:31.095 fused_ordering(195) 00:16:31.095 fused_ordering(196) 00:16:31.095 fused_ordering(197) 00:16:31.095 fused_ordering(198) 00:16:31.095 fused_ordering(199) 00:16:31.095 fused_ordering(200) 00:16:31.095 fused_ordering(201) 00:16:31.095 fused_ordering(202) 00:16:31.095 fused_ordering(203) 00:16:31.095 fused_ordering(204) 00:16:31.095 fused_ordering(205) 00:16:31.365 fused_ordering(206) 00:16:31.366 fused_ordering(207) 00:16:31.366 fused_ordering(208) 00:16:31.366 fused_ordering(209) 00:16:31.366 fused_ordering(210) 00:16:31.366 fused_ordering(211) 00:16:31.366 fused_ordering(212) 00:16:31.366 fused_ordering(213) 00:16:31.366 fused_ordering(214) 00:16:31.366 fused_ordering(215) 00:16:31.366 fused_ordering(216) 00:16:31.366 fused_ordering(217) 00:16:31.366 fused_ordering(218) 00:16:31.366 fused_ordering(219) 00:16:31.366 fused_ordering(220) 00:16:31.366 fused_ordering(221) 00:16:31.366 fused_ordering(222) 00:16:31.366 fused_ordering(223) 00:16:31.366 fused_ordering(224) 00:16:31.366 fused_ordering(225) 00:16:31.366 fused_ordering(226) 00:16:31.366 fused_ordering(227) 00:16:31.366 fused_ordering(228) 00:16:31.366 fused_ordering(229) 00:16:31.366 fused_ordering(230) 00:16:31.366 fused_ordering(231) 00:16:31.366 fused_ordering(232) 00:16:31.366 fused_ordering(233) 00:16:31.366 fused_ordering(234) 00:16:31.366 fused_ordering(235) 00:16:31.366 fused_ordering(236) 00:16:31.366 fused_ordering(237) 00:16:31.366 fused_ordering(238) 00:16:31.366 fused_ordering(239) 00:16:31.366 fused_ordering(240) 00:16:31.366 fused_ordering(241) 00:16:31.366 fused_ordering(242) 00:16:31.366 fused_ordering(243) 00:16:31.366 fused_ordering(244) 00:16:31.366 fused_ordering(245) 00:16:31.366 fused_ordering(246) 00:16:31.366 fused_ordering(247) 00:16:31.366 fused_ordering(248) 00:16:31.366 fused_ordering(249) 00:16:31.366 fused_ordering(250) 00:16:31.366 fused_ordering(251) 00:16:31.366 fused_ordering(252) 00:16:31.366 fused_ordering(253) 00:16:31.366 fused_ordering(254) 00:16:31.366 fused_ordering(255) 00:16:31.366 fused_ordering(256) 00:16:31.366 fused_ordering(257) 00:16:31.366 fused_ordering(258) 00:16:31.366 fused_ordering(259) 00:16:31.366 fused_ordering(260) 00:16:31.366 fused_ordering(261) 00:16:31.366 fused_ordering(262) 00:16:31.366 fused_ordering(263) 00:16:31.366 fused_ordering(264) 00:16:31.366 fused_ordering(265) 00:16:31.366 fused_ordering(266) 00:16:31.366 fused_ordering(267) 00:16:31.366 fused_ordering(268) 00:16:31.366 fused_ordering(269) 00:16:31.366 fused_ordering(270) 00:16:31.366 fused_ordering(271) 00:16:31.366 fused_ordering(272) 00:16:31.366 fused_ordering(273) 00:16:31.366 fused_ordering(274) 00:16:31.366 fused_ordering(275) 00:16:31.366 fused_ordering(276) 00:16:31.366 fused_ordering(277) 00:16:31.366 fused_ordering(278) 00:16:31.366 fused_ordering(279) 00:16:31.366 fused_ordering(280) 00:16:31.366 fused_ordering(281) 00:16:31.366 fused_ordering(282) 00:16:31.366 fused_ordering(283) 00:16:31.366 fused_ordering(284) 00:16:31.366 fused_ordering(285) 00:16:31.366 fused_ordering(286) 00:16:31.366 fused_ordering(287) 00:16:31.366 fused_ordering(288) 00:16:31.366 fused_ordering(289) 00:16:31.366 fused_ordering(290) 00:16:31.366 fused_ordering(291) 00:16:31.366 fused_ordering(292) 00:16:31.366 fused_ordering(293) 00:16:31.366 fused_ordering(294) 
00:16:31.366 fused_ordering(295) 00:16:31.366 fused_ordering(296) 00:16:31.366 fused_ordering(297) 00:16:31.366 fused_ordering(298) 00:16:31.366 fused_ordering(299) 00:16:31.366 fused_ordering(300) 00:16:31.366 fused_ordering(301) 00:16:31.366 fused_ordering(302) 00:16:31.366 fused_ordering(303) 00:16:31.366 fused_ordering(304) 00:16:31.366 fused_ordering(305) 00:16:31.366 fused_ordering(306) 00:16:31.366 fused_ordering(307) 00:16:31.366 fused_ordering(308) 00:16:31.366 fused_ordering(309) 00:16:31.366 fused_ordering(310) 00:16:31.366 fused_ordering(311) 00:16:31.366 fused_ordering(312) 00:16:31.366 fused_ordering(313) 00:16:31.366 fused_ordering(314) 00:16:31.366 fused_ordering(315) 00:16:31.366 fused_ordering(316) 00:16:31.366 fused_ordering(317) 00:16:31.366 fused_ordering(318) 00:16:31.366 fused_ordering(319) 00:16:31.366 fused_ordering(320) 00:16:31.366 fused_ordering(321) 00:16:31.366 fused_ordering(322) 00:16:31.366 fused_ordering(323) 00:16:31.366 fused_ordering(324) 00:16:31.366 fused_ordering(325) 00:16:31.366 fused_ordering(326) 00:16:31.366 fused_ordering(327) 00:16:31.366 fused_ordering(328) 00:16:31.366 fused_ordering(329) 00:16:31.366 fused_ordering(330) 00:16:31.366 fused_ordering(331) 00:16:31.366 fused_ordering(332) 00:16:31.367 fused_ordering(333) 00:16:31.367 fused_ordering(334) 00:16:31.367 fused_ordering(335) 00:16:31.367 fused_ordering(336) 00:16:31.367 fused_ordering(337) 00:16:31.367 fused_ordering(338) 00:16:31.367 fused_ordering(339) 00:16:31.367 fused_ordering(340) 00:16:31.367 fused_ordering(341) 00:16:31.367 fused_ordering(342) 00:16:31.367 fused_ordering(343) 00:16:31.367 fused_ordering(344) 00:16:31.367 fused_ordering(345) 00:16:31.367 fused_ordering(346) 00:16:31.367 fused_ordering(347) 00:16:31.367 fused_ordering(348) 00:16:31.367 fused_ordering(349) 00:16:31.367 fused_ordering(350) 00:16:31.367 fused_ordering(351) 00:16:31.367 fused_ordering(352) 00:16:31.367 fused_ordering(353) 00:16:31.367 fused_ordering(354) 00:16:31.367 fused_ordering(355) 00:16:31.367 fused_ordering(356) 00:16:31.367 fused_ordering(357) 00:16:31.367 fused_ordering(358) 00:16:31.367 fused_ordering(359) 00:16:31.367 fused_ordering(360) 00:16:31.367 fused_ordering(361) 00:16:31.367 fused_ordering(362) 00:16:31.367 fused_ordering(363) 00:16:31.367 fused_ordering(364) 00:16:31.367 fused_ordering(365) 00:16:31.367 fused_ordering(366) 00:16:31.367 fused_ordering(367) 00:16:31.367 fused_ordering(368) 00:16:31.367 fused_ordering(369) 00:16:31.367 fused_ordering(370) 00:16:31.367 fused_ordering(371) 00:16:31.367 fused_ordering(372) 00:16:31.367 fused_ordering(373) 00:16:31.367 fused_ordering(374) 00:16:31.367 fused_ordering(375) 00:16:31.367 fused_ordering(376) 00:16:31.367 fused_ordering(377) 00:16:31.367 fused_ordering(378) 00:16:31.367 fused_ordering(379) 00:16:31.367 fused_ordering(380) 00:16:31.367 fused_ordering(381) 00:16:31.367 fused_ordering(382) 00:16:31.367 fused_ordering(383) 00:16:31.367 fused_ordering(384) 00:16:31.367 fused_ordering(385) 00:16:31.367 fused_ordering(386) 00:16:31.367 fused_ordering(387) 00:16:31.367 fused_ordering(388) 00:16:31.367 fused_ordering(389) 00:16:31.367 fused_ordering(390) 00:16:31.367 fused_ordering(391) 00:16:31.367 fused_ordering(392) 00:16:31.367 fused_ordering(393) 00:16:31.367 fused_ordering(394) 00:16:31.367 fused_ordering(395) 00:16:31.367 fused_ordering(396) 00:16:31.367 fused_ordering(397) 00:16:31.367 fused_ordering(398) 00:16:31.367 fused_ordering(399) 00:16:31.367 fused_ordering(400) 00:16:31.367 fused_ordering(401) 00:16:31.367 
fused_ordering(402) 00:16:31.367 fused_ordering(403) 00:16:31.367 fused_ordering(404) 00:16:31.367 fused_ordering(405) 00:16:31.367 fused_ordering(406) 00:16:31.367 fused_ordering(407) 00:16:31.367 fused_ordering(408) 00:16:31.367 fused_ordering(409) 00:16:31.367 fused_ordering(410) 00:16:31.629 fused_ordering(411) 00:16:31.629 fused_ordering(412) 00:16:31.629 fused_ordering(413) 00:16:31.629 fused_ordering(414) 00:16:31.629 fused_ordering(415) 00:16:31.629 fused_ordering(416) 00:16:31.629 fused_ordering(417) 00:16:31.629 fused_ordering(418) 00:16:31.629 fused_ordering(419) 00:16:31.629 fused_ordering(420) 00:16:31.629 fused_ordering(421) 00:16:31.629 fused_ordering(422) 00:16:31.629 fused_ordering(423) 00:16:31.629 fused_ordering(424) 00:16:31.629 fused_ordering(425) 00:16:31.629 fused_ordering(426) 00:16:31.629 fused_ordering(427) 00:16:31.629 fused_ordering(428) 00:16:31.629 fused_ordering(429) 00:16:31.629 fused_ordering(430) 00:16:31.629 fused_ordering(431) 00:16:31.629 fused_ordering(432) 00:16:31.629 fused_ordering(433) 00:16:31.629 fused_ordering(434) 00:16:31.629 fused_ordering(435) 00:16:31.629 fused_ordering(436) 00:16:31.629 fused_ordering(437) 00:16:31.629 fused_ordering(438) 00:16:31.629 fused_ordering(439) 00:16:31.629 fused_ordering(440) 00:16:31.629 fused_ordering(441) 00:16:31.629 fused_ordering(442) 00:16:31.629 fused_ordering(443) 00:16:31.629 fused_ordering(444) 00:16:31.629 fused_ordering(445) 00:16:31.629 fused_ordering(446) 00:16:31.629 fused_ordering(447) 00:16:31.629 fused_ordering(448) 00:16:31.629 fused_ordering(449) 00:16:31.629 fused_ordering(450) 00:16:31.629 fused_ordering(451) 00:16:31.629 fused_ordering(452) 00:16:31.629 fused_ordering(453) 00:16:31.629 fused_ordering(454) 00:16:31.629 fused_ordering(455) 00:16:31.629 fused_ordering(456) 00:16:31.629 fused_ordering(457) 00:16:31.629 fused_ordering(458) 00:16:31.629 fused_ordering(459) 00:16:31.629 fused_ordering(460) 00:16:31.629 fused_ordering(461) 00:16:31.629 fused_ordering(462) 00:16:31.629 fused_ordering(463) 00:16:31.629 fused_ordering(464) 00:16:31.629 fused_ordering(465) 00:16:31.629 fused_ordering(466) 00:16:31.629 fused_ordering(467) 00:16:31.629 fused_ordering(468) 00:16:31.629 fused_ordering(469) 00:16:31.629 fused_ordering(470) 00:16:31.629 fused_ordering(471) 00:16:31.629 fused_ordering(472) 00:16:31.629 fused_ordering(473) 00:16:31.629 fused_ordering(474) 00:16:31.629 fused_ordering(475) 00:16:31.629 fused_ordering(476) 00:16:31.629 fused_ordering(477) 00:16:31.629 fused_ordering(478) 00:16:31.629 fused_ordering(479) 00:16:31.629 fused_ordering(480) 00:16:31.629 fused_ordering(481) 00:16:31.629 fused_ordering(482) 00:16:31.629 fused_ordering(483) 00:16:31.629 fused_ordering(484) 00:16:31.629 fused_ordering(485) 00:16:31.629 fused_ordering(486) 00:16:31.629 fused_ordering(487) 00:16:31.629 fused_ordering(488) 00:16:31.629 fused_ordering(489) 00:16:31.629 fused_ordering(490) 00:16:31.629 fused_ordering(491) 00:16:31.629 fused_ordering(492) 00:16:31.629 fused_ordering(493) 00:16:31.629 fused_ordering(494) 00:16:31.629 fused_ordering(495) 00:16:31.629 fused_ordering(496) 00:16:31.629 fused_ordering(497) 00:16:31.629 fused_ordering(498) 00:16:31.629 fused_ordering(499) 00:16:31.629 fused_ordering(500) 00:16:31.629 fused_ordering(501) 00:16:31.629 fused_ordering(502) 00:16:31.629 fused_ordering(503) 00:16:31.629 fused_ordering(504) 00:16:31.629 fused_ordering(505) 00:16:31.629 fused_ordering(506) 00:16:31.629 fused_ordering(507) 00:16:31.629 fused_ordering(508) 00:16:31.629 fused_ordering(509) 
00:16:31.629 fused_ordering(510) 00:16:31.629 fused_ordering(511) 00:16:31.629 fused_ordering(512) 00:16:31.630 fused_ordering(513) 00:16:31.630 fused_ordering(514) 00:16:31.630 fused_ordering(515) 00:16:31.630 fused_ordering(516) 00:16:31.630 fused_ordering(517) 00:16:31.630 fused_ordering(518) 00:16:31.630 fused_ordering(519) 00:16:31.630 fused_ordering(520) 00:16:31.630 fused_ordering(521) 00:16:31.630 fused_ordering(522) 00:16:31.630 fused_ordering(523) 00:16:31.630 fused_ordering(524) 00:16:31.630 fused_ordering(525) 00:16:31.630 fused_ordering(526) 00:16:31.630 fused_ordering(527) 00:16:31.630 fused_ordering(528) 00:16:31.630 fused_ordering(529) 00:16:31.630 fused_ordering(530) 00:16:31.630 fused_ordering(531) 00:16:31.630 fused_ordering(532) 00:16:31.630 fused_ordering(533) 00:16:31.630 fused_ordering(534) 00:16:31.630 fused_ordering(535) 00:16:31.630 fused_ordering(536) 00:16:31.630 fused_ordering(537) 00:16:31.630 fused_ordering(538) 00:16:31.630 fused_ordering(539) 00:16:31.630 fused_ordering(540) 00:16:31.630 fused_ordering(541) 00:16:31.630 fused_ordering(542) 00:16:31.630 fused_ordering(543) 00:16:31.630 fused_ordering(544) 00:16:31.630 fused_ordering(545) 00:16:31.630 fused_ordering(546) 00:16:31.630 fused_ordering(547) 00:16:31.630 fused_ordering(548) 00:16:31.630 fused_ordering(549) 00:16:31.630 fused_ordering(550) 00:16:31.630 fused_ordering(551) 00:16:31.630 fused_ordering(552) 00:16:31.630 fused_ordering(553) 00:16:31.630 fused_ordering(554) 00:16:31.630 fused_ordering(555) 00:16:31.630 fused_ordering(556) 00:16:31.630 fused_ordering(557) 00:16:31.630 fused_ordering(558) 00:16:31.630 fused_ordering(559) 00:16:31.630 fused_ordering(560) 00:16:31.630 fused_ordering(561) 00:16:31.630 fused_ordering(562) 00:16:31.630 fused_ordering(563) 00:16:31.630 fused_ordering(564) 00:16:31.630 fused_ordering(565) 00:16:31.630 fused_ordering(566) 00:16:31.630 fused_ordering(567) 00:16:31.630 fused_ordering(568) 00:16:31.630 fused_ordering(569) 00:16:31.630 fused_ordering(570) 00:16:31.630 fused_ordering(571) 00:16:31.630 fused_ordering(572) 00:16:31.630 fused_ordering(573) 00:16:31.630 fused_ordering(574) 00:16:31.630 fused_ordering(575) 00:16:31.630 fused_ordering(576) 00:16:31.630 fused_ordering(577) 00:16:31.630 fused_ordering(578) 00:16:31.630 fused_ordering(579) 00:16:31.630 fused_ordering(580) 00:16:31.630 fused_ordering(581) 00:16:31.630 fused_ordering(582) 00:16:31.630 fused_ordering(583) 00:16:31.630 fused_ordering(584) 00:16:31.630 fused_ordering(585) 00:16:31.630 fused_ordering(586) 00:16:31.630 fused_ordering(587) 00:16:31.630 fused_ordering(588) 00:16:31.630 fused_ordering(589) 00:16:31.630 fused_ordering(590) 00:16:31.630 fused_ordering(591) 00:16:31.630 fused_ordering(592) 00:16:31.630 fused_ordering(593) 00:16:31.630 fused_ordering(594) 00:16:31.630 fused_ordering(595) 00:16:31.630 fused_ordering(596) 00:16:31.630 fused_ordering(597) 00:16:31.630 fused_ordering(598) 00:16:31.630 fused_ordering(599) 00:16:31.630 fused_ordering(600) 00:16:31.630 fused_ordering(601) 00:16:31.630 fused_ordering(602) 00:16:31.630 fused_ordering(603) 00:16:31.630 fused_ordering(604) 00:16:31.630 fused_ordering(605) 00:16:31.630 fused_ordering(606) 00:16:31.630 fused_ordering(607) 00:16:31.630 fused_ordering(608) 00:16:31.630 fused_ordering(609) 00:16:31.630 fused_ordering(610) 00:16:31.630 fused_ordering(611) 00:16:31.630 fused_ordering(612) 00:16:31.630 fused_ordering(613) 00:16:31.630 fused_ordering(614) 00:16:31.630 fused_ordering(615) 00:16:32.196 fused_ordering(616) 00:16:32.196 
fused_ordering(617) 00:16:32.196 fused_ordering(618) 00:16:32.196 fused_ordering(619) 00:16:32.196 fused_ordering(620) 00:16:32.196 fused_ordering(621) 00:16:32.196 fused_ordering(622) 00:16:32.196 fused_ordering(623) 00:16:32.196 fused_ordering(624) 00:16:32.196 fused_ordering(625) 00:16:32.196 fused_ordering(626) 00:16:32.196 fused_ordering(627) 00:16:32.196 fused_ordering(628) 00:16:32.196 fused_ordering(629) 00:16:32.196 fused_ordering(630) 00:16:32.196 fused_ordering(631) 00:16:32.196 fused_ordering(632) 00:16:32.196 fused_ordering(633) 00:16:32.196 fused_ordering(634) 00:16:32.196 fused_ordering(635) 00:16:32.196 fused_ordering(636) 00:16:32.196 fused_ordering(637) 00:16:32.196 fused_ordering(638) 00:16:32.196 fused_ordering(639) 00:16:32.196 fused_ordering(640) 00:16:32.196 fused_ordering(641) 00:16:32.196 fused_ordering(642) 00:16:32.196 fused_ordering(643) 00:16:32.196 fused_ordering(644) 00:16:32.196 fused_ordering(645) 00:16:32.196 fused_ordering(646) 00:16:32.196 fused_ordering(647) 00:16:32.196 fused_ordering(648) 00:16:32.196 fused_ordering(649) 00:16:32.196 fused_ordering(650) 00:16:32.196 fused_ordering(651) 00:16:32.196 fused_ordering(652) 00:16:32.196 fused_ordering(653) 00:16:32.196 fused_ordering(654) 00:16:32.196 fused_ordering(655) 00:16:32.196 fused_ordering(656) 00:16:32.196 fused_ordering(657) 00:16:32.196 fused_ordering(658) 00:16:32.196 fused_ordering(659) 00:16:32.196 fused_ordering(660) 00:16:32.196 fused_ordering(661) 00:16:32.196 fused_ordering(662) 00:16:32.196 fused_ordering(663) 00:16:32.196 fused_ordering(664) 00:16:32.196 fused_ordering(665) 00:16:32.196 fused_ordering(666) 00:16:32.196 fused_ordering(667) 00:16:32.196 fused_ordering(668) 00:16:32.196 fused_ordering(669) 00:16:32.196 fused_ordering(670) 00:16:32.196 fused_ordering(671) 00:16:32.196 fused_ordering(672) 00:16:32.196 fused_ordering(673) 00:16:32.196 fused_ordering(674) 00:16:32.196 fused_ordering(675) 00:16:32.196 fused_ordering(676) 00:16:32.196 fused_ordering(677) 00:16:32.196 fused_ordering(678) 00:16:32.196 fused_ordering(679) 00:16:32.196 fused_ordering(680) 00:16:32.196 fused_ordering(681) 00:16:32.196 fused_ordering(682) 00:16:32.196 fused_ordering(683) 00:16:32.196 fused_ordering(684) 00:16:32.196 fused_ordering(685) 00:16:32.196 fused_ordering(686) 00:16:32.196 fused_ordering(687) 00:16:32.196 fused_ordering(688) 00:16:32.196 fused_ordering(689) 00:16:32.196 fused_ordering(690) 00:16:32.196 fused_ordering(691) 00:16:32.196 fused_ordering(692) 00:16:32.196 fused_ordering(693) 00:16:32.196 fused_ordering(694) 00:16:32.196 fused_ordering(695) 00:16:32.196 fused_ordering(696) 00:16:32.196 fused_ordering(697) 00:16:32.196 fused_ordering(698) 00:16:32.196 fused_ordering(699) 00:16:32.196 fused_ordering(700) 00:16:32.196 fused_ordering(701) 00:16:32.196 fused_ordering(702) 00:16:32.196 fused_ordering(703) 00:16:32.196 fused_ordering(704) 00:16:32.196 fused_ordering(705) 00:16:32.196 fused_ordering(706) 00:16:32.196 fused_ordering(707) 00:16:32.196 fused_ordering(708) 00:16:32.196 fused_ordering(709) 00:16:32.196 fused_ordering(710) 00:16:32.196 fused_ordering(711) 00:16:32.196 fused_ordering(712) 00:16:32.196 fused_ordering(713) 00:16:32.196 fused_ordering(714) 00:16:32.196 fused_ordering(715) 00:16:32.196 fused_ordering(716) 00:16:32.196 fused_ordering(717) 00:16:32.196 fused_ordering(718) 00:16:32.196 fused_ordering(719) 00:16:32.196 fused_ordering(720) 00:16:32.196 fused_ordering(721) 00:16:32.196 fused_ordering(722) 00:16:32.196 fused_ordering(723) 00:16:32.196 fused_ordering(724) 
00:16:32.196 fused_ordering(725) 00:16:32.196 fused_ordering(726) 00:16:32.196 fused_ordering(727) 00:16:32.197 fused_ordering(728) 00:16:32.197 fused_ordering(729) 00:16:32.197 fused_ordering(730) 00:16:32.197 fused_ordering(731) 00:16:32.197 fused_ordering(732) 00:16:32.197 fused_ordering(733) 00:16:32.197 fused_ordering(734) 00:16:32.197 fused_ordering(735) 00:16:32.197 fused_ordering(736) 00:16:32.197 fused_ordering(737) 00:16:32.197 fused_ordering(738) 00:16:32.197 fused_ordering(739) 00:16:32.197 fused_ordering(740) 00:16:32.197 fused_ordering(741) 00:16:32.197 fused_ordering(742) 00:16:32.197 fused_ordering(743) 00:16:32.197 fused_ordering(744) 00:16:32.197 fused_ordering(745) 00:16:32.197 fused_ordering(746) 00:16:32.197 fused_ordering(747) 00:16:32.197 fused_ordering(748) 00:16:32.197 fused_ordering(749) 00:16:32.197 fused_ordering(750) 00:16:32.197 fused_ordering(751) 00:16:32.197 fused_ordering(752) 00:16:32.197 fused_ordering(753) 00:16:32.197 fused_ordering(754) 00:16:32.197 fused_ordering(755) 00:16:32.197 fused_ordering(756) 00:16:32.197 fused_ordering(757) 00:16:32.197 fused_ordering(758) 00:16:32.197 fused_ordering(759) 00:16:32.197 fused_ordering(760) 00:16:32.197 fused_ordering(761) 00:16:32.197 fused_ordering(762) 00:16:32.197 fused_ordering(763) 00:16:32.197 fused_ordering(764) 00:16:32.197 fused_ordering(765) 00:16:32.197 fused_ordering(766) 00:16:32.197 fused_ordering(767) 00:16:32.197 fused_ordering(768) 00:16:32.197 fused_ordering(769) 00:16:32.197 fused_ordering(770) 00:16:32.197 fused_ordering(771) 00:16:32.197 fused_ordering(772) 00:16:32.197 fused_ordering(773) 00:16:32.197 fused_ordering(774) 00:16:32.197 fused_ordering(775) 00:16:32.197 fused_ordering(776) 00:16:32.197 fused_ordering(777) 00:16:32.197 fused_ordering(778) 00:16:32.197 fused_ordering(779) 00:16:32.197 fused_ordering(780) 00:16:32.197 fused_ordering(781) 00:16:32.197 fused_ordering(782) 00:16:32.197 fused_ordering(783) 00:16:32.197 fused_ordering(784) 00:16:32.197 fused_ordering(785) 00:16:32.197 fused_ordering(786) 00:16:32.197 fused_ordering(787) 00:16:32.197 fused_ordering(788) 00:16:32.197 fused_ordering(789) 00:16:32.197 fused_ordering(790) 00:16:32.197 fused_ordering(791) 00:16:32.197 fused_ordering(792) 00:16:32.197 fused_ordering(793) 00:16:32.197 fused_ordering(794) 00:16:32.197 fused_ordering(795) 00:16:32.197 fused_ordering(796) 00:16:32.197 fused_ordering(797) 00:16:32.197 fused_ordering(798) 00:16:32.197 fused_ordering(799) 00:16:32.197 fused_ordering(800) 00:16:32.197 fused_ordering(801) 00:16:32.197 fused_ordering(802) 00:16:32.197 fused_ordering(803) 00:16:32.197 fused_ordering(804) 00:16:32.197 fused_ordering(805) 00:16:32.197 fused_ordering(806) 00:16:32.197 fused_ordering(807) 00:16:32.197 fused_ordering(808) 00:16:32.197 fused_ordering(809) 00:16:32.197 fused_ordering(810) 00:16:32.197 fused_ordering(811) 00:16:32.197 fused_ordering(812) 00:16:32.197 fused_ordering(813) 00:16:32.197 fused_ordering(814) 00:16:32.197 fused_ordering(815) 00:16:32.197 fused_ordering(816) 00:16:32.197 fused_ordering(817) 00:16:32.197 fused_ordering(818) 00:16:32.197 fused_ordering(819) 00:16:32.197 fused_ordering(820) 00:16:32.764 fused_ordering(821) 00:16:32.764 fused_ordering(822) 00:16:32.764 fused_ordering(823) 00:16:32.764 fused_ordering(824) 00:16:32.764 fused_ordering(825) 00:16:32.764 fused_ordering(826) 00:16:32.764 fused_ordering(827) 00:16:32.764 fused_ordering(828) 00:16:32.764 fused_ordering(829) 00:16:32.764 fused_ordering(830) 00:16:32.764 fused_ordering(831) 00:16:32.764 
fused_ordering(832) 00:16:32.764 fused_ordering(833) 00:16:32.764 fused_ordering(834) 00:16:32.764 fused_ordering(835) 00:16:32.764 fused_ordering(836) 00:16:32.764 fused_ordering(837) 00:16:32.764 fused_ordering(838) 00:16:32.764 fused_ordering(839) 00:16:32.764 fused_ordering(840) 00:16:32.764 fused_ordering(841) 00:16:32.764 fused_ordering(842) 00:16:32.764 fused_ordering(843) 00:16:32.764 fused_ordering(844) 00:16:32.764 fused_ordering(845) 00:16:32.764 fused_ordering(846) 00:16:32.764 fused_ordering(847) 00:16:32.764 fused_ordering(848) 00:16:32.764 fused_ordering(849) 00:16:32.764 fused_ordering(850) 00:16:32.764 fused_ordering(851) 00:16:32.764 fused_ordering(852) 00:16:32.764 fused_ordering(853) 00:16:32.764 fused_ordering(854) 00:16:32.764 fused_ordering(855) 00:16:32.764 fused_ordering(856) 00:16:32.764 fused_ordering(857) 00:16:32.764 fused_ordering(858) 00:16:32.764 fused_ordering(859) 00:16:32.764 fused_ordering(860) 00:16:32.764 fused_ordering(861) 00:16:32.764 fused_ordering(862) 00:16:32.764 fused_ordering(863) 00:16:32.764 fused_ordering(864) 00:16:32.764 fused_ordering(865) 00:16:32.764 fused_ordering(866) 00:16:32.764 fused_ordering(867) 00:16:32.764 fused_ordering(868) 00:16:32.764 fused_ordering(869) 00:16:32.764 fused_ordering(870) 00:16:32.764 fused_ordering(871) 00:16:32.764 fused_ordering(872) 00:16:32.764 fused_ordering(873) 00:16:32.764 fused_ordering(874) 00:16:32.764 fused_ordering(875) 00:16:32.764 fused_ordering(876) 00:16:32.764 fused_ordering(877) 00:16:32.764 fused_ordering(878) 00:16:32.764 fused_ordering(879) 00:16:32.764 fused_ordering(880) 00:16:32.764 fused_ordering(881) 00:16:32.764 fused_ordering(882) 00:16:32.764 fused_ordering(883) 00:16:32.764 fused_ordering(884) 00:16:32.764 fused_ordering(885) 00:16:32.764 fused_ordering(886) 00:16:32.764 fused_ordering(887) 00:16:32.764 fused_ordering(888) 00:16:32.764 fused_ordering(889) 00:16:32.764 fused_ordering(890) 00:16:32.764 fused_ordering(891) 00:16:32.764 fused_ordering(892) 00:16:32.764 fused_ordering(893) 00:16:32.764 fused_ordering(894) 00:16:32.764 fused_ordering(895) 00:16:32.764 fused_ordering(896) 00:16:32.764 fused_ordering(897) 00:16:32.764 fused_ordering(898) 00:16:32.764 fused_ordering(899) 00:16:32.764 fused_ordering(900) 00:16:32.764 fused_ordering(901) 00:16:32.764 fused_ordering(902) 00:16:32.764 fused_ordering(903) 00:16:32.764 fused_ordering(904) 00:16:32.764 fused_ordering(905) 00:16:32.764 fused_ordering(906) 00:16:32.764 fused_ordering(907) 00:16:32.764 fused_ordering(908) 00:16:32.764 fused_ordering(909) 00:16:32.764 fused_ordering(910) 00:16:32.764 fused_ordering(911) 00:16:32.764 fused_ordering(912) 00:16:32.764 fused_ordering(913) 00:16:32.764 fused_ordering(914) 00:16:32.764 fused_ordering(915) 00:16:32.764 fused_ordering(916) 00:16:32.764 fused_ordering(917) 00:16:32.764 fused_ordering(918) 00:16:32.764 fused_ordering(919) 00:16:32.764 fused_ordering(920) 00:16:32.764 fused_ordering(921) 00:16:32.764 fused_ordering(922) 00:16:32.764 fused_ordering(923) 00:16:32.764 fused_ordering(924) 00:16:32.764 fused_ordering(925) 00:16:32.764 fused_ordering(926) 00:16:32.764 fused_ordering(927) 00:16:32.764 fused_ordering(928) 00:16:32.764 fused_ordering(929) 00:16:32.764 fused_ordering(930) 00:16:32.764 fused_ordering(931) 00:16:32.764 fused_ordering(932) 00:16:32.764 fused_ordering(933) 00:16:32.764 fused_ordering(934) 00:16:32.764 fused_ordering(935) 00:16:32.764 fused_ordering(936) 00:16:32.764 fused_ordering(937) 00:16:32.764 fused_ordering(938) 00:16:32.765 fused_ordering(939) 
00:16:32.765 fused_ordering(940) 00:16:32.765 fused_ordering(941) 00:16:32.765 fused_ordering(942) 00:16:32.765 fused_ordering(943) 00:16:32.765 fused_ordering(944) 00:16:32.765 fused_ordering(945) 00:16:32.765 fused_ordering(946) 00:16:32.765 fused_ordering(947) 00:16:32.765 fused_ordering(948) 00:16:32.765 fused_ordering(949) 00:16:32.765 fused_ordering(950) 00:16:32.765 fused_ordering(951) 00:16:32.765 fused_ordering(952) 00:16:32.765 fused_ordering(953) 00:16:32.765 fused_ordering(954) 00:16:32.765 fused_ordering(955) 00:16:32.765 fused_ordering(956) 00:16:32.765 fused_ordering(957) 00:16:32.765 fused_ordering(958) 00:16:32.765 fused_ordering(959) 00:16:32.765 fused_ordering(960) 00:16:32.765 fused_ordering(961) 00:16:32.765 fused_ordering(962) 00:16:32.765 fused_ordering(963) 00:16:32.765 fused_ordering(964) 00:16:32.765 fused_ordering(965) 00:16:32.765 fused_ordering(966) 00:16:32.765 fused_ordering(967) 00:16:32.765 fused_ordering(968) 00:16:32.765 fused_ordering(969) 00:16:32.765 fused_ordering(970) 00:16:32.765 fused_ordering(971) 00:16:32.765 fused_ordering(972) 00:16:32.765 fused_ordering(973) 00:16:32.765 fused_ordering(974) 00:16:32.765 fused_ordering(975) 00:16:32.765 fused_ordering(976) 00:16:32.765 fused_ordering(977) 00:16:32.765 fused_ordering(978) 00:16:32.765 fused_ordering(979) 00:16:32.765 fused_ordering(980) 00:16:32.765 fused_ordering(981) 00:16:32.765 fused_ordering(982) 00:16:32.765 fused_ordering(983) 00:16:32.765 fused_ordering(984) 00:16:32.765 fused_ordering(985) 00:16:32.765 fused_ordering(986) 00:16:32.765 fused_ordering(987) 00:16:32.765 fused_ordering(988) 00:16:32.765 fused_ordering(989) 00:16:32.765 fused_ordering(990) 00:16:32.765 fused_ordering(991) 00:16:32.765 fused_ordering(992) 00:16:32.765 fused_ordering(993) 00:16:32.765 fused_ordering(994) 00:16:32.765 fused_ordering(995) 00:16:32.765 fused_ordering(996) 00:16:32.765 fused_ordering(997) 00:16:32.765 fused_ordering(998) 00:16:32.765 fused_ordering(999) 00:16:32.765 fused_ordering(1000) 00:16:32.765 fused_ordering(1001) 00:16:32.765 fused_ordering(1002) 00:16:32.765 fused_ordering(1003) 00:16:32.765 fused_ordering(1004) 00:16:32.765 fused_ordering(1005) 00:16:32.765 fused_ordering(1006) 00:16:32.765 fused_ordering(1007) 00:16:32.765 fused_ordering(1008) 00:16:32.765 fused_ordering(1009) 00:16:32.765 fused_ordering(1010) 00:16:32.765 fused_ordering(1011) 00:16:32.765 fused_ordering(1012) 00:16:32.765 fused_ordering(1013) 00:16:32.765 fused_ordering(1014) 00:16:32.765 fused_ordering(1015) 00:16:32.765 fused_ordering(1016) 00:16:32.765 fused_ordering(1017) 00:16:32.765 fused_ordering(1018) 00:16:32.765 fused_ordering(1019) 00:16:32.765 fused_ordering(1020) 00:16:32.765 fused_ordering(1021) 00:16:32.765 fused_ordering(1022) 00:16:32.765 fused_ordering(1023) 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:16:32.765 rmmod nvme_tcp 00:16:32.765 rmmod nvme_fabrics 00:16:32.765 rmmod nvme_keyring 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2364413 ']' 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2364413 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2364413 ']' 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2364413 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364413 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364413' 00:16:32.765 killing process with pid 2364413 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2364413 00:16:32.765 23:20:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2364413 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.144 23:20:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.687 23:20:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:36.687 00:16:36.687 real 0m11.573s 00:16:36.687 user 0m6.763s 00:16:36.687 sys 0m5.446s 00:16:36.687 23:20:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.687 23:20:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:36.687 ************************************ 00:16:36.687 END TEST nvmf_fused_ordering 00:16:36.687 ************************************ 00:16:36.687 23:20:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:36.687 23:20:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:36.687 23:20:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.687 23:20:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
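For reference, the killprocess helper traced above follows a standard shell pattern: probe the pid with kill -0 (which sends no signal and only tests existence), check the process name so a sudo wrapper is never signalled by mistake, then kill and reap. A minimal sketch of that pattern, assuming only coreutils/procps; names are illustrative rather than the verbatim autotest_common.sh source:

    killprocess() {
        local pid=$1
        # kill -0 delivers no signal; it only checks that the process exists
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            # refuse to signal a bare sudo wrapper by accident
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so the test leaves no zombie behind
        wait "$pid" || true
    }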
00:16:36.687 23:20:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.687 ************************************ 00:16:36.687 START TEST nvmf_delete_subsystem 00:16:36.687 ************************************ 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:36.687 * Looking for test storage... 00:16:36.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.687 23:20:45 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.687 23:20:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.963 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:41.964 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:41.964 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.964 
23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:41.964 Found net devices under 0000:86:00.0: cvl_0_0 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:41.964 Found net devices under 0000:86:00.1: cvl_0_1 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.964 23:20:50 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:16:41.964 00:16:41.964 --- 10.0.0.2 ping statistics --- 00:16:41.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.964 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:16:41.964 00:16:41.964 --- 10.0.0.1 ping statistics --- 00:16:41.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.964 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2368629 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2368629 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2368629 ']' 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.964 23:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:41.964 [2024-07-10 23:20:50.714034] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:41.964 [2024-07-10 23:20:50.714125] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.964 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.964 [2024-07-10 23:20:50.821509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:41.964 [2024-07-10 23:20:51.029712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.965 [2024-07-10 23:20:51.029757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.965 [2024-07-10 23:20:51.029771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.965 [2024-07-10 23:20:51.029780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.965 [2024-07-10 23:20:51.029789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
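The waitforlisten step above blocks until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket before any rpc_cmd is issued. A minimal sketch of that polling loop, assuming the stock SPDK scripts/rpc.py client (a simplification of the real helper, not its verbatim source):

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods only succeeds once the target is up and accepting RPCs
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done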
00:16:41.965 [2024-07-10 23:20:51.029856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.965 [2024-07-10 23:20:51.029870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.545 [2024-07-10 23:20:51.549655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.545 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 [2024-07-10 23:20:51.569837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 NULL1 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 Delay0 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.546 23:20:51 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2368875 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:42.546 23:20:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:42.845 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.845 [2024-07-10 23:20:51.681928] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:44.749 23:20:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.749 23:20:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.749 23:20:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 Read completed with error (sct=0, sc=8) 00:16:45.008 starting I/O failed: -6 00:16:45.008 Write completed with error (sct=0, sc=8) 00:16:45.008 Read 
completed with error (sct=0, sc=8) 00:16:45.008 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided] [2024-07-10 23:20:53.830428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:16:45.008 [repeated completion-error lines elided] [2024-07-10 23:20:53.831028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:16:45.008 [repeated completion-error and "starting I/O failed: -6" lines elided] [2024-07-10 23:20:53.832844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:16:45.945 [2024-07-10 23:20:54.778443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001de00 is same with the state(5) to be set 00:16:45.945 [repeated completion-error lines elided] [2024-07-10 23:20:54.833232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(5) to be set 00:16:45.945 [repeated completion-error lines elided] [2024-07-10 23:20:54.834010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e580 is same with the state(5) to be set
00:16:45.945 [repeated "Read/Write completed with error (sct=0, sc=8)" lines elided] [2024-07-10 23:20:54.834681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(5) to be set 00:16:45.945 [repeated completion-error lines elided] [2024-07-10 23:20:54.840110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:16:45.945 23:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.945 23:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:45.945 23:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2368875 00:16:45.945 23:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:45.945 Initializing NVMe Controllers 00:16:45.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:45.945 Controller IO queue size 128, less than required. 00:16:45.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:45.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:45.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:45.945 Initialization complete. Launching workers.
00:16:45.945 ======================================================== 00:16:45.945 Latency(us) 00:16:45.945 Device Information : IOPS MiB/s Average min max 00:16:45.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.96 0.09 954535.78 512.78 1013945.82 00:16:45.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.75 0.07 894908.90 617.83 1014748.70 00:16:45.945 ======================================================== 00:16:45.945 Total : 335.71 0.16 927760.94 512.78 1014748.70 00:16:45.945 00:16:45.945 [2024-07-10 23:20:54.841633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001de00 (9): Bad file descriptor 00:16:45.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2368875 00:16:46.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2368875) - No such process 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2368875 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2368875 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2368875 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.516 [2024-07-10 23:20:55.365220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2369403 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:46.516 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:46.516 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.516 [2024-07-10 23:20:55.464547] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:47.084 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:47.084 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:47.084 23:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:47.344 23:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:47.344 23:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:47.344 23:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:47.912 23:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:47.912 23:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:47.912 23:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:48.480 23:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:48.480 23:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:48.480 23:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.049 23:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.049 23:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:49.049 23:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.618 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.618 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:49.618 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.618 Initializing NVMe Controllers 00:16:49.618 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.618 Controller IO queue size 128, less than required. 00:16:49.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:49.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:49.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:49.618 Initialization complete. Launching workers. 00:16:49.618 ======================================================== 00:16:49.618 Latency(us) 00:16:49.618 Device Information : IOPS MiB/s Average min max 00:16:49.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003637.64 1000196.26 1041285.52 00:16:49.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005861.44 1000204.84 1043267.46 00:16:49.618 ======================================================== 00:16:49.618 Total : 256.00 0.12 1004749.54 1000196.26 1043267.46 00:16:49.618 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2369403 00:16:49.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2369403) - No such process 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2369403 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.877 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.877 rmmod nvme_tcp 00:16:49.877 rmmod nvme_fabrics 00:16:50.136 rmmod nvme_keyring 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2368629 ']' 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2368629 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2368629 ']' 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2368629 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.136 23:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2368629 00:16:50.136 23:20:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:16:50.136 23:20:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.136 23:20:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2368629' 00:16:50.136 killing process with pid 2368629 00:16:50.136 23:20:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2368629 00:16:50.136 23:20:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2368629 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.515 23:21:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.422 23:21:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:53.422 00:16:53.422 real 0m17.201s 00:16:53.422 user 0m31.908s 00:16:53.422 sys 0m5.070s 00:16:53.422 23:21:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.422 23:21:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:53.422 ************************************ 00:16:53.422 END TEST nvmf_delete_subsystem 00:16:53.422 ************************************ 00:16:53.422 23:21:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:53.422 23:21:02 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:53.422 23:21:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.422 23:21:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.422 23:21:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.422 ************************************ 00:16:53.422 START TEST nvmf_ns_masking 00:16:53.422 ************************************ 00:16:53.422 23:21:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:53.681 * Looking for test storage... 
00:16:53.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.681 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3012b0cf-7103-4fec-b36e-d077e796675b 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=174e65c1-fd4b-41f5-a19c-817a87ee51f7 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0db464af-1132-4b58-9781-7037df67520d 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.682 23:21:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:58.963 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:58.963 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.963 
23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:58.963 Found net devices under 0000:86:00.0: cvl_0_0 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:58.963 Found net devices under 0000:86:00.1: cvl_0_1 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.963 23:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:16:58.963 00:16:58.963 --- 10.0.0.2 ping statistics --- 00:16:58.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.963 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:16:58.963 00:16:58.963 --- 10.0.0.1 ping statistics --- 00:16:58.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.963 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.963 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2373862 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2373862 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2373862 ']' 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.223 23:21:08 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.223 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:59.223 [2024-07-10 23:21:08.127556] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:59.223 [2024-07-10 23:21:08.127642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.223 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.223 [2024-07-10 23:21:08.238154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.482 [2024-07-10 23:21:08.452594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.482 [2024-07-10 23:21:08.452641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.482 [2024-07-10 23:21:08.452653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.482 [2024-07-10 23:21:08.452664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.482 [2024-07-10 23:21:08.452673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.482 [2024-07-10 23:21:08.452706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.051 23:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.051 [2024-07-10 23:21:09.087107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.051 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:00.051 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:00.051 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:00.311 Malloc1 00:17:00.311 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:00.570 Malloc2 00:17:00.570 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
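[Editorial note: the target-side setup traced above reduces to a short RPC sequence. A minimal sketch of the same configuration, assuming an already-running nvmf_tgt and using only the rpc.py calls that appear verbatim in this trace (workspace paths shortened for readability):

    # create the TCP transport; -o/-u 8192 match the options used in the trace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # back the namespaces with two 64 MiB, 512-byte-block malloc bdevs
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    # create the subsystem; -a allows any host, -s sets the serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
]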
00:17:00.829 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:01.088 23:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.088 [2024-07-10 23:21:10.122992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.088 23:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:01.088 23:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0db464af-1132-4b58-9781-7037df67520d -a 10.0.0.2 -s 4420 -i 4 00:17:01.347 23:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:01.347 23:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:01.347 23:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:01.347 23:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:01.347 23:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:03.253 [ 0]:0x1 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:03.253 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e6355d7ab3a42d193456974261fc90a 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e6355d7ab3a42d193456974261fc90a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
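[Editorial note: every ns_is_visible check that follows repeats one pattern: list the active namespace IDs through nvme-cli, then compare the namespace's NGUID against the all-zero value an inactive namespace reports. A condensed sketch of that check, assuming the controller enumerated as /dev/nvme0 as in this trace:

    # is NSID 1 in the active namespace list?
    nvme list-ns /dev/nvme0 | grep 0x1
    # an all-zero NGUID means the namespace is not visible to this host
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]] && echo visible
]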
00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:03.511 [ 0]:0x1 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e6355d7ab3a42d193456974261fc90a 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e6355d7ab3a42d193456974261fc90a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:03.511 [ 1]:0x2 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:03.511 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:03.769 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:03.769 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:03.769 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:03.769 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.769 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.027 23:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:04.027 23:21:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:04.027 23:21:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0db464af-1132-4b58-9781-7037df67520d -a 10.0.0.2 -s 4420 -i 4 00:17:04.285 23:21:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:04.285 23:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:04.285 23:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.285 23:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:04.285 23:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:04.285 23:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:06.186 23:21:15 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:06.186 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:06.445 [ 0]:0x2 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.445 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:06.704 [ 0]:0x1 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e6355d7ab3a42d193456974261fc90a 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e6355d7ab3a42d193456974261fc90a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:06.704 [ 1]:0x2 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.704 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:06.993 [ 0]:0x2 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.993 23:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:06.993 23:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.278 23:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:07.278 23:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:07.278 23:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0db464af-1132-4b58-9781-7037df67520d -a 10.0.0.2 -s 4420 -i 4 00:17:07.538 23:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:07.538 23:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:07.538 23:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.538 23:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:07.538 23:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:07.538 23:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
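[Editorial note: the masking operations exercised in this test map onto three RPCs: attach a namespace without automatic visibility, then grant or revoke access per host NQN. A minimal sketch using the calls shown verbatim in this trace:

    # attach Malloc1 as NSID 1, hidden from all hosts by default
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # make NSID 1 visible to host1 only
    scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # revoke it again; host1 then sees the all-zero NGUID for NSID 1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
]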
00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:09.443 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:09.702 [ 0]:0x1 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e6355d7ab3a42d193456974261fc90a 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e6355d7ab3a42d193456974261fc90a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:09.702 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.702 [ 1]:0x2 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:09.962 23:21:18 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:10.221 [ 0]:0x2 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:10.221 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:10.480 [2024-07-10 23:21:19.297551] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:10.480 request: 00:17:10.480 { 00:17:10.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.480 "nsid": 2, 00:17:10.480 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.480 "method": "nvmf_ns_remove_host", 00:17:10.480 "req_id": 1 00:17:10.480 } 00:17:10.480 Got JSON-RPC error response 00:17:10.480 response: 00:17:10.480 { 00:17:10.480 "code": -32602, 00:17:10.480 "message": "Invalid parameters" 00:17:10.480 } 00:17:10.480 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:10.481 [ 0]:0x2 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc6a377a94ec49efaf4b5caf92e6c342 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
cc6a377a94ec49efaf4b5caf92e6c342 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:10.481 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2376308 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2376308 /var/tmp/host.sock 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2376308 ']' 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:10.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.740 23:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:10.740 [2024-07-10 23:21:19.685911] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
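A note on the namespace re-add that follows: the uuid2nguid helper traced below (nvmf/common.sh@759) evidently converts a canonical UUID into the 32-hex-digit NGUID string that nvmf_subsystem_add_ns accepts via -g. A minimal sketch consistent with the `tr -d -` call and the uppercase -g values in this trace; the exact body of the helper is an assumption:

    # Assumed implementation: strip dashes and upper-case (bash 4+ for ${1^^}).
    uuid2nguid() {
        tr -d - <<< "${1^^}"
    }
    uuid2nguid 3012b0cf-7103-4fec-b36e-d077e796675b
    # prints 3012B0CF71034FECB36ED077E796675B, the -g value passed below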
00:17:10.740 [2024-07-10 23:21:19.686006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376308 ] 00:17:10.740 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.740 [2024-07-10 23:21:19.790899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.000 [2024-07-10 23:21:20.019295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.938 23:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.938 23:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:11.938 23:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.196 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:12.196 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3012b0cf-7103-4fec-b36e-d077e796675b 00:17:12.196 23:21:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:12.455 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3012B0CF71034FECB36ED077E796675B -i 00:17:12.455 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 174e65c1-fd4b-41f5-a19c-817a87ee51f7 00:17:12.455 23:21:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:12.455 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 174E65C1FD4B41F5A19C817A87EE51F7 -i 00:17:12.715 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:12.715 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:12.974 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:12.974 23:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:13.232 nvme0n1 00:17:13.232 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:13.232 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:17:13.492 nvme1n2 00:17:13.492 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:13.492 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:13.492 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:13.492 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:13.492 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:13.751 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:13.751 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:13.751 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:13.751 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:14.010 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3012b0cf-7103-4fec-b36e-d077e796675b == \3\0\1\2\b\0\c\f\-\7\1\0\3\-\4\f\e\c\-\b\3\6\e\-\d\0\7\7\e\7\9\6\6\7\5\b ]] 00:17:14.010 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:14.010 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:14.010 23:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 174e65c1-fd4b-41f5-a19c-817a87ee51f7 == \1\7\4\e\6\5\c\1\-\f\d\4\b\-\4\1\f\5\-\a\1\9\c\-\8\1\7\a\8\7\e\e\5\1\f\7 ]] 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2376308 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2376308 ']' 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2376308 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.010 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2376308 00:17:14.268 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:14.268 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:14.268 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2376308' 00:17:14.268 killing process with pid 2376308 00:17:14.268 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2376308 00:17:14.268 23:21:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2376308 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:16.803 23:21:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.803 rmmod nvme_tcp 00:17:16.803 rmmod nvme_fabrics 00:17:16.803 rmmod nvme_keyring 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2373862 ']' 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2373862 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2373862 ']' 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2373862 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2373862 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2373862' 00:17:16.803 killing process with pid 2373862 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2373862 00:17:16.803 23:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2373862 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.710 23:21:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.618 23:21:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:20.618 00:17:20.618 real 0m27.045s 00:17:20.618 user 0m30.931s 00:17:20.618 sys 0m6.444s 00:17:20.618 23:21:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.618 23:21:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:20.618 ************************************ 00:17:20.618 END TEST nvmf_ns_masking 00:17:20.618 ************************************ 00:17:20.618 23:21:29 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:20.618 23:21:29 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:17:20.618 23:21:29 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:20.618 23:21:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.618 23:21:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.618 23:21:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.618 ************************************ 00:17:20.618 START TEST nvmf_nvme_cli 00:17:20.618 ************************************ 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:20.618 * Looking for test storage... 00:17:20.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.618 23:21:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.619 23:21:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.619 23:21:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.878 23:21:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.878 23:21:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:20.879 23:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.155 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.155 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.155 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.155 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.155 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.155 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:26.156 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:26.156 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:26.156 Found net devices under 0000:86:00.0: cvl_0_0 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:26.156 Found net devices under 0000:86:00.1: cvl_0_1 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.156 23:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.156 23:21:35 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:17:26.156 00:17:26.156 --- 10.0.0.2 ping statistics --- 00:17:26.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.156 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:17:26.156 00:17:26.156 --- 10.0.0.1 ping statistics --- 00:17:26.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.156 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2380997 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2380997 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2380997 ']' 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.156 [2024-07-10 23:21:35.179282] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
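The nvmf_tcp_init steps above (nvmf/common.sh@229-268) set up the standard two-endpoint topology these phy runs use: the first E810 port moves into a private network namespace and plays the target, the second stays in the root namespace as the initiator, and one ping in each direction proves the 10.0.0.0/24 path before the target application starts. Condensed from the commands in the trace; the cvl_0_0/cvl_0_1 names are what this rig reports, not general defaults:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator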
00:17:26.156 [2024-07-10 23:21:35.179385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.415 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.415 [2024-07-10 23:21:35.287756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.675 [2024-07-10 23:21:35.505377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.675 [2024-07-10 23:21:35.505418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.675 [2024-07-10 23:21:35.505430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.675 [2024-07-10 23:21:35.505438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.675 [2024-07-10 23:21:35.505447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.675 [2024-07-10 23:21:35.505568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.675 [2024-07-10 23:21:35.505676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.675 [2024-07-10 23:21:35.505738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.675 [2024-07-10 23:21:35.505749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.934 23:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.934 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 [2024-07-10 23:21:36.005195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 Malloc0 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 Malloc1 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 [2024-07-10 23:21:36.232351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.193 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:27.194 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.194 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.194 23:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.194 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:27.452 00:17:27.453 Discovery Log Number of Records 2, Generation counter 2 00:17:27.453 =====Discovery Log Entry 0====== 00:17:27.453 trtype: tcp 00:17:27.453 adrfam: ipv4 00:17:27.453 subtype: current discovery subsystem 00:17:27.453 treq: not required 00:17:27.453 portid: 0 00:17:27.453 trsvcid: 4420 00:17:27.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:27.453 traddr: 10.0.0.2 00:17:27.453 eflags: explicit discovery connections, duplicate discovery information 00:17:27.453 sectype: none 00:17:27.453 =====Discovery Log Entry 1====== 00:17:27.453 trtype: tcp 00:17:27.453 adrfam: ipv4 00:17:27.453 subtype: nvme subsystem 00:17:27.453 treq: not required 00:17:27.453 portid: 0 00:17:27.453 trsvcid: 4420 00:17:27.453 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:27.453 traddr: 10.0.0.2 00:17:27.453 eflags: none 00:17:27.453 sectype: none 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:27.453 23:21:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.830 23:21:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:28.830 23:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:28.830 23:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.830 23:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:28.830 23:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:28.830 23:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:30.734 23:21:39 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:30.734 /dev/nvme0n1 ]] 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.734 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:30.994 23:21:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:31.252 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:31.253 23:21:40 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.253 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:31.253 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.253 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.253 rmmod nvme_tcp 00:17:31.511 rmmod nvme_fabrics 00:17:31.511 rmmod nvme_keyring 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2380997 ']' 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2380997 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2380997 ']' 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2380997 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2380997 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2380997' 00:17:31.511 killing process with pid 2380997 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2380997 00:17:31.511 23:21:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2380997 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.486 23:21:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.391 23:21:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.391 00:17:35.391 real 0m14.691s 00:17:35.391 user 0m26.687s 00:17:35.391 sys 0m4.783s 00:17:35.391 23:21:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.391 23:21:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:35.391 ************************************ 00:17:35.391 END TEST nvmf_nvme_cli 00:17:35.391 ************************************ 00:17:35.391 23:21:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.391 23:21:44 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:17:35.391 23:21:44 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:35.391 23:21:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.391 23:21:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.391 23:21:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.391 ************************************ 00:17:35.391 START TEST nvmf_host_management 00:17:35.391 ************************************ 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:35.391 * Looking for test storage... 00:17:35.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.391 23:21:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.392 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.650 
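The nvmf_host_management test starting here runs through the same wrapper lifecycle as every section in this log. A condensed skeleton, with the function behavior summarized from the xtrace above rather than quoted from nvmf/common.sh:

    nvmftestinit          # pick the e810 ports, build the netns, verify with ping
    nvmfappstart -m 0xF   # ip netns exec ... nvmf_tgt, then waitforlisten $nvmfpid
    # test body: create the tcp transport and subsystem, nvme connect,
    # exercise the device, nvme disconnect
    nvmftestfini          # nvmfcleanup: sync, rmmod nvme-tcp/nvme-fabrics/nvme-keyring;
                          # then killprocess $nvmfpid, remove_spdk_ns, ip -4 addr flush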
23:21:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.650 23:21:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.929 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:40.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:40.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:40.930 Found net devices under 0000:86:00.0: cvl_0_0 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:40.930 Found net devices under 0000:86:00.1: cvl_0_1 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:17:40.930 00:17:40.930 --- 10.0.0.2 ping statistics --- 00:17:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.930 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:17:40.930 00:17:40.930 --- 10.0.0.1 ping statistics --- 00:17:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.930 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2385489 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2385489 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:40.930 
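Annotation: the nvmf_tcp_init step traced above is what makes this a "phy" run: the target-side port cvl_0_0 is moved into its own network namespace, so initiator and target traffic must actually traverse the E810 link rather than the kernel's local path. A condensed sketch of the topology it builds (commands paraphrased from the trace; interface and namespace names as logged):

    # target port gets its own namespace; the initiator keeps cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Both pings coming back with one packet received and sub-millisecond RTT is the gate for nvmftestinit to continue.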
23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2385489 ']' 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.930 23:21:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:40.930 [2024-07-10 23:21:49.848168] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:40.930 [2024-07-10 23:21:49.848253] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.930 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.930 [2024-07-10 23:21:49.960292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.190 [2024-07-10 23:21:50.187061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.190 [2024-07-10 23:21:50.187109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.190 [2024-07-10 23:21:50.187122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.190 [2024-07-10 23:21:50.187130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.190 [2024-07-10 23:21:50.187140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
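Annotation: nvmfappstart has just launched nvmf_tgt (pid 2385489) inside the target namespace with core mask 0x1E (cores 1-4, matching the four reactor notices that follow), and waitforlisten polls until the target answers on /var/tmp/spdk.sock before any RPCs are issued. A minimal sketch of that wait, assuming an SPDK checkout as the working directory (the real helper is waitforlisten in autotest_common.sh; the function name here is ours):

    # poll until the SPDK app answers on its UNIX-domain RPC socket
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            # rpc_get_methods fails until the app is listening on $sock
            if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }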
00:17:41.190 [2024-07-10 23:21:50.187209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.190 [2024-07-10 23:21:50.187301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.190 [2024-07-10 23:21:50.187403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.190 [2024-07-10 23:21:50.187438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 [2024-07-10 23:21:50.665000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:41.759 Malloc0 00:17:41.759 [2024-07-10 23:21:50.790462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.759 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:41.760 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.760 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2385753 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2385753 /var/tmp/bdevperf.sock 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2385753 ']' 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.018 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:42.018 { 00:17:42.018 "params": { 00:17:42.018 "name": "Nvme$subsystem", 00:17:42.018 "trtype": "$TEST_TRANSPORT", 00:17:42.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.018 "adrfam": "ipv4", 00:17:42.018 "trsvcid": "$NVMF_PORT", 00:17:42.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.019 "hdgst": ${hdgst:-false}, 00:17:42.019 "ddgst": ${ddgst:-false} 00:17:42.019 }, 00:17:42.019 "method": "bdev_nvme_attach_controller" 00:17:42.019 } 00:17:42.019 EOF 00:17:42.019 )") 00:17:42.019 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:42.019 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:42.019 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:42.019 23:21:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.019 "params": { 00:17:42.019 "name": "Nvme0", 00:17:42.019 "trtype": "tcp", 00:17:42.019 "traddr": "10.0.0.2", 00:17:42.019 "adrfam": "ipv4", 00:17:42.019 "trsvcid": "4420", 00:17:42.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:42.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:42.019 "hdgst": false, 00:17:42.019 "ddgst": false 00:17:42.019 }, 00:17:42.019 "method": "bdev_nvme_attach_controller" 00:17:42.019 }' 00:17:42.019 [2024-07-10 23:21:50.907008] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:42.019 [2024-07-10 23:21:50.907103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385753 ] 00:17:42.019 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.019 [2024-07-10 23:21:51.009524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.277 [2024-07-10 23:21:51.233191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.843 Running I/O for 10 seconds... 
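Annotation: gen_nvmf_target_json produced the bdev_nvme_attach_controller stanza printed above and handed it to bdevperf as --json /dev/fd/63. A standalone reproduction of the same run, for reference; the outer "subsystems" wrapper is reconstructed from SPDK's usual JSON-config shape and is not itself shown in the log:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 64-deep, 64 KiB, verify workload for 10 seconds, as in the trace above
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10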
00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:42.843 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:43.101 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:17:43.102 23:21:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.362 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:43.362 [2024-07-10 23:21:52.259863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.259999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260065] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260244] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.362 [2024-07-10 23:21:52.260260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260417] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:43.363 [2024-07-10 23:21:52.260544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.260988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.260998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.261009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.261018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.261029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.261039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.261050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.363 [2024-07-10 23:21:52.261059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.363 [2024-07-10 23:21:52.261070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.364 [2024-07-10 23:21:52.261440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.364 [2024-07-10 23:21:52.261449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: within 23:21:52.261 the same command/completion pair repeated for the remaining in-flight READs, sqid:1 cid:40 through cid:63, lba:87040 through lba:89984, len:128, each completing with ABORTED - SQ DELETION (00/08), i.e. NVMe generic status 08h, command aborted due to submission queue deletion]
00:17:43.365 [2024-07-10 23:21:52.261945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set
00:17:43.365 [2024-07-10 23:21:52.262209] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032da00 was disconnected and freed. reset controller.
00:17:43.365 [2024-07-10 23:21:52.263253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:43.365 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.365 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:43.365 task offset: 81920 on job bdev=Nvme0n1 fails 00:17:43.365 00:17:43.365 Latency(us) 00:17:43.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.365 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:43.365 Job: Nvme0n1 ended in about 0.42 seconds with error 00:17:43.365 Verification LBA range: start 0x0 length 0x400 00:17:43.365 Nvme0n1 : 0.42 1538.57 96.16 153.86 0.00 36776.07 4701.50 31457.28 00:17:43.365 =================================================================================================================== 00:17:43.365 Total : 1538.57 96.16 153.86 0.00 36776.07 4701.50 31457.28 00:17:43.365 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.365 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:43.365 [2024-07-10 23:21:52.268246] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:43.365 [2024-07-10 23:21:52.268278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:17:43.365 23:21:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.365 23:21:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:43.365 [2024-07-10 23:21:52.279111] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
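(For reference: rpc_cmd in this trace is the test harness's wrapper around scripts/rpc.py, so the host re-add at host_management.sh@85 corresponds to a direct call along these lines, a sketch assuming the default /var/tmp/spdk.sock RPC socket:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0    # put host0 back on cnode0's allowed-host list

With host0 allowed again, the controller reset that follows can reconnect, which the "Resetting controller successful" notice above confirms.)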
00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2385753 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:44.303 { 00:17:44.303 "params": { 00:17:44.303 "name": "Nvme$subsystem", 00:17:44.303 "trtype": "$TEST_TRANSPORT", 00:17:44.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.303 "adrfam": "ipv4", 00:17:44.303 "trsvcid": "$NVMF_PORT", 00:17:44.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.303 "hdgst": ${hdgst:-false}, 00:17:44.303 "ddgst": ${ddgst:-false} 00:17:44.303 }, 00:17:44.303 "method": "bdev_nvme_attach_controller" 00:17:44.303 } 00:17:44.303 EOF 00:17:44.303 )") 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:44.303 23:21:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:44.303 "params": { 00:17:44.303 "name": "Nvme0", 00:17:44.303 "trtype": "tcp", 00:17:44.303 "traddr": "10.0.0.2", 00:17:44.303 "adrfam": "ipv4", 00:17:44.303 "trsvcid": "4420", 00:17:44.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:44.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:44.303 "hdgst": false, 00:17:44.303 "ddgst": false 00:17:44.303 }, 00:17:44.303 "method": "bdev_nvme_attach_controller" 00:17:44.303 }' 00:17:44.303 [2024-07-10 23:21:53.353534] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:44.303 [2024-07-10 23:21:53.353623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386216 ] 00:17:44.562 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.562 [2024-07-10 23:21:53.454483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.821 [2024-07-10 23:21:53.686669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.387 Running I/O for 1 seconds... 
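(The heredoc above is how gen_nvmf_target_json builds the bdevperf config: the "Nvme$subsystem" template is instantiated from the test environment and fed to bdevperf via --json /dev/fd/62. Reassembled from the printf output in this log, and assuming the usual "subsystems"/"bdev" wrapper the helper emits around the fragment, the config is roughly:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

so the 1-second verify run above executes against the Nvme0n1 bdev this attaches over TCP; its results follow.)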
00:17:46.324 00:17:46.324 Latency(us) 00:17:46.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.324 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.324 Verification LBA range: start 0x0 length 0x400 00:17:46.324 Nvme0n1 : 1.01 1652.44 103.28 0.00 0.00 38106.94 9972.87 31457.28 00:17:46.324 =================================================================================================================== 00:17:46.324 Total : 1652.44 103.28 0.00 0.00 38106.94 9972.87 31457.28 00:17:47.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2385753 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.263 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:47.522 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:47.523 rmmod nvme_tcp 00:17:47.523 rmmod nvme_fabrics 00:17:47.523 rmmod nvme_keyring 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2385489 ']' 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2385489 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2385489 ']' 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2385489 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2385489 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2385489' 00:17:47.523 killing process with 
pid 2385489 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2385489 00:17:47.523 23:21:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2385489 00:17:48.902 [2024-07-10 23:21:57.920012] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.161 23:21:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.066 23:22:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.066 23:22:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:51.066 00:17:51.066 real 0m15.713s 00:17:51.066 user 0m35.860s 00:17:51.066 sys 0m5.404s 00:17:51.066 23:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.066 23:22:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:51.066 ************************************ 00:17:51.066 END TEST nvmf_host_management 00:17:51.066 ************************************ 00:17:51.066 23:22:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:51.066 23:22:00 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:51.066 23:22:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.066 23:22:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.066 23:22:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.066 ************************************ 00:17:51.066 START TEST nvmf_lvol 00:17:51.066 ************************************ 00:17:51.066 23:22:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:51.325 * Looking for test storage... 
00:17:51.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.325 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@2-@6 -- # [PATH setup condensed: PATH is repeatedly prefixed with /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin, then exported and echoed] 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol --
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.326 23:22:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:56.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:56.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:56.598 Found net devices under 0000:86:00.0: cvl_0_0 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:56.598 Found net devices under 0000:86:00.1: cvl_0_1 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:56.598 
23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.598 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:56.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:17:56.599 00:17:56.599 --- 10.0.0.2 ping statistics --- 00:17:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.599 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:56.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:17:56.599 00:17:56.599 --- 10.0.0.1 ping statistics --- 00:17:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.599 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2390251 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2390251 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2390251 ']' 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.599 23:22:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:56.599 [2024-07-10 23:22:05.618521] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:56.599 [2024-07-10 23:22:05.618611] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.858 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.858 [2024-07-10 23:22:05.729131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.117 [2024-07-10 23:22:05.940674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.117 [2024-07-10 23:22:05.940718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:57.117 [2024-07-10 23:22:05.940732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.117 [2024-07-10 23:22:05.940741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.117 [2024-07-10 23:22:05.940750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.117 [2024-07-10 23:22:05.940874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.117 [2024-07-10 23:22:05.940981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.117 [2024-07-10 23:22:05.940994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.377 23:22:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:57.636 [2024-07-10 23:22:06.579382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.636 23:22:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:57.896 23:22:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:57.896 23:22:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:58.155 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:58.155 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:58.415 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:58.673 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9fec6309-7260-41c1-8b2a-e4bebb5c9136 00:17:58.673 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9fec6309-7260-41c1-8b2a-e4bebb5c9136 lvol 20 00:17:58.673 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=46cddb22-ea69-41f8-916d-87efdfdc8314 00:17:58.673 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:58.932 23:22:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46cddb22-ea69-41f8-916d-87efdfdc8314 00:17:59.191 23:22:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
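(Condensed from the rpc.py calls above, the target this test provisions is the following sequence, a sketch with $rpc_py standing for the scripts/rpc.py path set at nvmf_lvol.sh@16; the UUIDs are the ones printed in the log:

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512        # Malloc0: 64 MB, 512 B blocks
    $rpc_py bdev_malloc_create 64 512        # Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc_py bdev_lvol_create_lvstore raid0 lvs
    #   -> lvstore 9fec6309-7260-41c1-8b2a-e4bebb5c9136
    $rpc_py bdev_lvol_create -u 9fec6309-7260-41c1-8b2a-e4bebb5c9136 lvol 20
    #   -> lvol 46cddb22-ea69-41f8-916d-87efdfdc8314 (20 MB, LVOL_BDEV_INIT_SIZE)
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46cddb22-ea69-41f8-916d-87efdfdc8314
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

i.e. two malloc bdevs striped into raid0, an lvstore on the raid, and a 20 MB lvol exported as a namespace of cnode0 on 10.0.0.2:4420, as the listen notice below records.)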
00:17:59.191 [2024-07-10 23:22:08.205670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.191 23:22:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:59.449 23:22:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2390807 00:17:59.449 23:22:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:59.449 23:22:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:59.449 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.480 23:22:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 46cddb22-ea69-41f8-916d-87efdfdc8314 MY_SNAPSHOT 00:18:00.739 23:22:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1fa326de-5745-4ee7-bd4b-e32348aeb415 00:18:00.739 23:22:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 46cddb22-ea69-41f8-916d-87efdfdc8314 30 00:18:00.999 23:22:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1fa326de-5745-4ee7-bd4b-e32348aeb415 MY_CLONE 00:18:01.259 23:22:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=df18513f-8c48-4f26-b0a3-721a80246c4d 00:18:01.259 23:22:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate df18513f-8c48-4f26-b0a3-721a80246c4d 00:18:01.828 23:22:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2390807 00:18:09.947 Initializing NVMe Controllers 00:18:09.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:09.947 Controller IO queue size 128, less than required. 00:18:09.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:09.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:09.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:09.947 Initialization complete. Launching workers. 
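(While spdk_nvme_perf, pid 2390807, drives a 10-second randwrite load at the exported lvol, the test mutates the lvol tree in flight; the sequence above, condensed as a sketch:

    $rpc_py bdev_lvol_snapshot 46cddb22-ea69-41f8-916d-87efdfdc8314 MY_SNAPSHOT
    #   -> snapshot 1fa326de-5745-4ee7-bd4b-e32348aeb415
    $rpc_py bdev_lvol_resize 46cddb22-ea69-41f8-916d-87efdfdc8314 30     # grow the lvol from 20 to 30 (LVOL_BDEV_FINAL_SIZE)
    $rpc_py bdev_lvol_clone 1fa326de-5745-4ee7-bd4b-e32348aeb415 MY_CLONE
    #   -> clone df18513f-8c48-4f26-b0a3-721a80246c4d
    $rpc_py bdev_lvol_inflate df18513f-8c48-4f26-b0a3-721a80246c4d       # allocate the clone fully, decoupling it from the snapshot

'wait 2390807' then lets the perf run complete, producing the results table below.)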
00:18:09.947 ======================================================== 00:18:09.947 Latency(us) 00:18:09.947 Device Information : IOPS MiB/s Average min max 00:18:09.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11067.00 43.23 11568.15 528.73 189145.97 00:18:09.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10799.90 42.19 11858.72 4591.60 153528.90 00:18:09.947 ======================================================== 00:18:09.947 Total : 21866.90 85.42 11711.66 528.73 189145.97 00:18:09.947 00:18:09.947 23:22:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:10.205 23:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 46cddb22-ea69-41f8-916d-87efdfdc8314 00:18:10.205 23:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fec6309-7260-41c1-8b2a-e4bebb5c9136 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.464 rmmod nvme_tcp 00:18:10.464 rmmod nvme_fabrics 00:18:10.464 rmmod nvme_keyring 00:18:10.464 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2390251 ']' 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2390251 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2390251 ']' 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2390251 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390251 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390251' 00:18:10.723 killing process with pid 2390251 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2390251 00:18:10.723 23:22:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2390251 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.628 
23:22:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.628 23:22:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.531 23:22:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:14.531 00:18:14.531 real 0m23.241s 00:18:14.531 user 1m7.848s 00:18:14.531 sys 0m6.692s 00:18:14.531 23:22:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.531 23:22:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:14.531 ************************************ 00:18:14.531 END TEST nvmf_lvol 00:18:14.531 ************************************ 00:18:14.531 23:22:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:14.531 23:22:23 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:14.531 23:22:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.531 23:22:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.531 23:22:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.532 ************************************ 00:18:14.532 START TEST nvmf_lvs_grow 00:18:14.532 ************************************ 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:14.532 * Looking for test storage... 
00:18:14.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2-@6 -- # [PATH setup condensed: identical PATH prefixing, export and echo as in the nvmf_lvol section above] 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- #
xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.532 23:22:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:19.807 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:19.807 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:19.807 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:19.808 Found net devices under 0000:86:00.0: cvl_0_0 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:19.808 Found net devices under 0000:86:00.1: cvl_0_1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:19.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:18:19.808 00:18:19.808 --- 10.0.0.2 ping statistics --- 00:18:19.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.808 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:18:19.808 00:18:19.808 --- 10.0.0.1 ping statistics --- 00:18:19.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.808 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2396300 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2396300 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2396300 ']' 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.808 23:22:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:19.808 [2024-07-10 23:22:28.865024] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:18:19.808 [2024-07-10 23:22:28.865108] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.067 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.067 [2024-07-10 23:22:28.972075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.327 [2024-07-10 23:22:29.175119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.327 [2024-07-10 23:22:29.175165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:20.327 [2024-07-10 23:22:29.175177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.327 [2024-07-10 23:22:29.175187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.327 [2024-07-10 23:22:29.175196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.327 [2024-07-10 23:22:29.175224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.586 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.586 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:18:20.586 23:22:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.586 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.586 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:20.845 [2024-07-10 23:22:29.824071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:20.845 ************************************ 00:18:20.845 START TEST lvs_grow_clean 00:18:20.845 ************************************ 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:20.845 23:22:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:21.103 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:18:21.103 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:21.363 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:21.363 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:21.363 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:21.364 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:21.364 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:21.364 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce96ea72-4e1e-402e-882f-255b16fffbf6 lvol 150 00:18:21.623 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=db0b319b-38fc-4720-b46c-34ff16903dd3 00:18:21.623 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:21.623 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:21.882 [2024-07-10 23:22:30.762686] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:21.882 [2024-07-10 23:22:30.762757] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:21.882 true 00:18:21.882 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:21.882 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:21.882 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:21.882 23:22:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:22.142 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db0b319b-38fc-4720-b46c-34ff16903dd3 00:18:22.401 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:22.401 [2024-07-10 23:22:31.416756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.401 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2396803 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2396803 /var/tmp/bdevperf.sock 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2396803 ']' 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.660 23:22:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:22.660 [2024-07-10 23:22:31.655668] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:18:22.660 [2024-07-10 23:22:31.655757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396803 ] 00:18:22.660 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.919 [2024-07-10 23:22:31.757588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.919 [2024-07-10 23:22:31.981857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.488 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.488 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:18:23.488 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:23.747 Nvme0n1 00:18:23.747 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:24.006 [ 00:18:24.006 { 00:18:24.006 "name": "Nvme0n1", 00:18:24.006 "aliases": [ 00:18:24.006 "db0b319b-38fc-4720-b46c-34ff16903dd3" 00:18:24.006 ], 00:18:24.006 "product_name": "NVMe disk", 00:18:24.006 "block_size": 4096, 00:18:24.006 "num_blocks": 38912, 00:18:24.006 "uuid": "db0b319b-38fc-4720-b46c-34ff16903dd3", 00:18:24.006 "assigned_rate_limits": { 00:18:24.006 "rw_ios_per_sec": 0, 00:18:24.006 "rw_mbytes_per_sec": 0, 00:18:24.006 "r_mbytes_per_sec": 0, 00:18:24.006 "w_mbytes_per_sec": 0 00:18:24.006 }, 00:18:24.006 "claimed": false, 00:18:24.006 "zoned": false, 00:18:24.006 "supported_io_types": { 00:18:24.006 "read": true, 00:18:24.006 "write": true, 00:18:24.006 "unmap": true, 00:18:24.006 "flush": true, 00:18:24.006 "reset": true, 00:18:24.006 "nvme_admin": true, 00:18:24.006 "nvme_io": true, 00:18:24.006 "nvme_io_md": false, 00:18:24.006 "write_zeroes": true, 00:18:24.006 "zcopy": false, 00:18:24.006 "get_zone_info": false, 00:18:24.006 "zone_management": false, 00:18:24.006 "zone_append": false, 00:18:24.006 "compare": true, 00:18:24.006 "compare_and_write": true, 00:18:24.006 "abort": true, 00:18:24.006 "seek_hole": false, 00:18:24.006 "seek_data": false, 00:18:24.006 "copy": true, 00:18:24.006 "nvme_iov_md": false 00:18:24.006 }, 00:18:24.006 "memory_domains": [ 00:18:24.006 { 00:18:24.006 "dma_device_id": "system", 00:18:24.006 "dma_device_type": 1 00:18:24.006 } 00:18:24.006 ], 00:18:24.006 "driver_specific": { 00:18:24.006 "nvme": [ 00:18:24.006 { 00:18:24.006 "trid": { 00:18:24.006 "trtype": "TCP", 00:18:24.006 "adrfam": "IPv4", 00:18:24.007 "traddr": "10.0.0.2", 00:18:24.007 "trsvcid": "4420", 00:18:24.007 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:24.007 }, 00:18:24.007 "ctrlr_data": { 00:18:24.007 "cntlid": 1, 00:18:24.007 "vendor_id": "0x8086", 00:18:24.007 "model_number": "SPDK bdev Controller", 00:18:24.007 "serial_number": "SPDK0", 00:18:24.007 "firmware_revision": "24.09", 00:18:24.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:24.007 "oacs": { 00:18:24.007 "security": 0, 00:18:24.007 "format": 0, 00:18:24.007 "firmware": 0, 00:18:24.007 "ns_manage": 0 00:18:24.007 }, 00:18:24.007 "multi_ctrlr": true, 00:18:24.007 "ana_reporting": false 00:18:24.007 }, 
00:18:24.007 "vs": { 00:18:24.007 "nvme_version": "1.3" 00:18:24.007 }, 00:18:24.007 "ns_data": { 00:18:24.007 "id": 1, 00:18:24.007 "can_share": true 00:18:24.007 } 00:18:24.007 } 00:18:24.007 ], 00:18:24.007 "mp_policy": "active_passive" 00:18:24.007 } 00:18:24.007 } 00:18:24.007 ] 00:18:24.007 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2397034 00:18:24.007 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:24.007 23:22:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:24.007 Running I/O for 10 seconds... 00:18:24.944 Latency(us) 00:18:24.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.944 Nvme0n1 : 1.00 20006.00 78.15 0.00 0.00 0.00 0.00 0.00 00:18:24.944 =================================================================================================================== 00:18:24.944 Total : 20006.00 78.15 0.00 0.00 0.00 0.00 0.00 00:18:24.944 00:18:25.881 23:22:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:25.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.881 Nvme0n1 : 2.00 20082.50 78.45 0.00 0.00 0.00 0.00 0.00 00:18:25.881 =================================================================================================================== 00:18:25.881 Total : 20082.50 78.45 0.00 0.00 0.00 0.00 0.00 00:18:25.881 00:18:26.140 true 00:18:26.140 23:22:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:26.140 23:22:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:26.140 23:22:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:26.140 23:22:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:26.140 23:22:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2397034 00:18:27.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.078 Nvme0n1 : 3.00 20091.33 78.48 0.00 0.00 0.00 0.00 0.00 00:18:27.078 =================================================================================================================== 00:18:27.078 Total : 20091.33 78.48 0.00 0.00 0.00 0.00 0.00 00:18:27.078 00:18:28.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.015 Nvme0n1 : 4.00 20180.25 78.83 0.00 0.00 0.00 0.00 0.00 00:18:28.015 =================================================================================================================== 00:18:28.015 Total : 20180.25 78.83 0.00 0.00 0.00 0.00 0.00 00:18:28.015 00:18:29.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.052 Nvme0n1 : 5.00 20221.00 78.99 0.00 0.00 0.00 0.00 0.00 00:18:29.052 =================================================================================================================== 00:18:29.052 
Total : 20221.00 78.99 0.00 0.00 0.00 0.00 0.00 00:18:29.052 00:18:29.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.990 Nvme0n1 : 6.00 20259.67 79.14 0.00 0.00 0.00 0.00 0.00 00:18:29.990 =================================================================================================================== 00:18:29.990 Total : 20259.67 79.14 0.00 0.00 0.00 0.00 0.00 00:18:29.990 00:18:30.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.927 Nvme0n1 : 7.00 20288.43 79.25 0.00 0.00 0.00 0.00 0.00 00:18:30.927 =================================================================================================================== 00:18:30.927 Total : 20288.43 79.25 0.00 0.00 0.00 0.00 0.00 00:18:30.927 00:18:32.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.305 Nvme0n1 : 8.00 20292.38 79.27 0.00 0.00 0.00 0.00 0.00 00:18:32.305 =================================================================================================================== 00:18:32.305 Total : 20292.38 79.27 0.00 0.00 0.00 0.00 0.00 00:18:32.305 00:18:33.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.243 Nvme0n1 : 9.00 20312.67 79.35 0.00 0.00 0.00 0.00 0.00 00:18:33.243 =================================================================================================================== 00:18:33.243 Total : 20312.67 79.35 0.00 0.00 0.00 0.00 0.00 00:18:33.243 00:18:34.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.180 Nvme0n1 : 10.00 20325.80 79.40 0.00 0.00 0.00 0.00 0.00 00:18:34.180 =================================================================================================================== 00:18:34.180 Total : 20325.80 79.40 0.00 0.00 0.00 0.00 0.00 00:18:34.180 00:18:34.180 00:18:34.180 Latency(us) 00:18:34.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.180 Nvme0n1 : 10.01 20324.49 79.39 0.00 0.00 6294.52 3846.68 15614.66 00:18:34.180 =================================================================================================================== 00:18:34.180 Total : 20324.49 79.39 0.00 0.00 6294.52 3846.68 15614.66 00:18:34.180 0 00:18:34.180 23:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2396803 00:18:34.180 23:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2396803 ']' 00:18:34.180 23:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2396803 00:18:34.180 23:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:18:34.180 23:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.180 23:22:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396803 00:18:34.180 23:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:34.180 23:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:34.180 23:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396803' 00:18:34.180 killing process with pid 2396803 00:18:34.180 23:22:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2396803 00:18:34.180 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.180 00:18:34.180 Latency(us) 00:18:34.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.180 =================================================================================================================== 00:18:34.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.180 23:22:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2396803 00:18:35.114 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:35.372 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:35.372 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:35.372 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:35.631 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:35.631 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:35.631 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:35.889 [2024-07-10 23:22:44.737967] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.889 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:35.890 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:35.890 request: 00:18:35.890 { 00:18:35.890 "uuid": "ce96ea72-4e1e-402e-882f-255b16fffbf6", 00:18:35.890 "method": "bdev_lvol_get_lvstores", 00:18:35.890 "req_id": 1 00:18:35.890 } 00:18:35.890 Got JSON-RPC error response 00:18:35.890 response: 00:18:35.890 { 00:18:35.890 "code": -19, 00:18:35.890 "message": "No such device" 00:18:35.890 } 00:18:35.890 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:18:35.890 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.890 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.890 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.890 23:22:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:36.148 aio_bdev 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev db0b319b-38fc-4720-b46c-34ff16903dd3 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=db0b319b-38fc-4720-b46c-34ff16903dd3 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:36.148 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:36.407 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db0b319b-38fc-4720-b46c-34ff16903dd3 -t 2000 00:18:36.407 [ 00:18:36.407 { 00:18:36.407 "name": "db0b319b-38fc-4720-b46c-34ff16903dd3", 00:18:36.407 "aliases": [ 00:18:36.407 "lvs/lvol" 00:18:36.407 ], 00:18:36.407 "product_name": "Logical Volume", 00:18:36.407 "block_size": 4096, 00:18:36.407 "num_blocks": 38912, 00:18:36.407 "uuid": "db0b319b-38fc-4720-b46c-34ff16903dd3", 00:18:36.407 "assigned_rate_limits": { 00:18:36.407 "rw_ios_per_sec": 0, 00:18:36.407 "rw_mbytes_per_sec": 0, 00:18:36.407 "r_mbytes_per_sec": 0, 00:18:36.407 "w_mbytes_per_sec": 0 00:18:36.407 }, 00:18:36.407 "claimed": false, 00:18:36.407 "zoned": false, 00:18:36.407 "supported_io_types": { 00:18:36.407 "read": true, 00:18:36.407 "write": true, 00:18:36.407 "unmap": true, 00:18:36.407 "flush": false, 00:18:36.407 "reset": true, 00:18:36.407 "nvme_admin": false, 00:18:36.407 "nvme_io": false, 00:18:36.407 
"nvme_io_md": false, 00:18:36.407 "write_zeroes": true, 00:18:36.407 "zcopy": false, 00:18:36.407 "get_zone_info": false, 00:18:36.407 "zone_management": false, 00:18:36.407 "zone_append": false, 00:18:36.407 "compare": false, 00:18:36.407 "compare_and_write": false, 00:18:36.407 "abort": false, 00:18:36.407 "seek_hole": true, 00:18:36.407 "seek_data": true, 00:18:36.407 "copy": false, 00:18:36.407 "nvme_iov_md": false 00:18:36.407 }, 00:18:36.407 "driver_specific": { 00:18:36.407 "lvol": { 00:18:36.407 "lvol_store_uuid": "ce96ea72-4e1e-402e-882f-255b16fffbf6", 00:18:36.407 "base_bdev": "aio_bdev", 00:18:36.407 "thin_provision": false, 00:18:36.407 "num_allocated_clusters": 38, 00:18:36.407 "snapshot": false, 00:18:36.407 "clone": false, 00:18:36.407 "esnap_clone": false 00:18:36.407 } 00:18:36.407 } 00:18:36.407 } 00:18:36.407 ] 00:18:36.407 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:18:36.407 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:36.407 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:36.666 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:36.666 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:36.666 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:36.925 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:36.925 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db0b319b-38fc-4720-b46c-34ff16903dd3 00:18:36.925 23:22:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce96ea72-4e1e-402e-882f-255b16fffbf6 00:18:37.183 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:37.442 00:18:37.442 real 0m16.446s 00:18:37.442 user 0m16.140s 00:18:37.442 sys 0m1.393s 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:37.442 ************************************ 00:18:37.442 END TEST lvs_grow_clean 00:18:37.442 ************************************ 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:37.442 ************************************ 00:18:37.442 START TEST lvs_grow_dirty 00:18:37.442 ************************************ 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:37.442 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:37.701 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:37.701 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:37.701 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=99712357-75a2-45c3-b108-8be01a872c59 00:18:37.701 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59 00:18:37.701 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:37.960 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:37.960 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:37.960 23:22:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99712357-75a2-45c3-b108-8be01a872c59 lvol 150 00:18:38.219 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b1ee5cf0-b783-4abe-86e5-cdbbf066da02 00:18:38.219 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:38.219 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:38.219 
[2024-07-10 23:22:47.247205] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:38.219 [2024-07-10 23:22:47.247274] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:38.219 true 00:18:38.219 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59 00:18:38.219 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:38.478 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:38.478 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:38.737 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1ee5cf0-b783-4abe-86e5-cdbbf066da02 00:18:38.737 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:38.996 [2024-07-10 23:22:47.937393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.996 23:22:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2399613 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2399613 /var/tmp/bdevperf.sock 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2399613 ']' 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
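bdevperf is launched with -z, so it sits paused on its own RPC socket (/var/tmp/bdevperf.sock, distinct from the target's /var/tmp/spdk.sock) until the harness attaches the exported namespace and releases it. Condensed, the two calls that follow in the trace (paths shortened to repo-relative form):

  # attach the NVMe/TCP namespace as local bdev Nvme0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # un-pause bdevperf so it runs its queued randwrite job (-q 128 -o 4096 -w randwrite -t 10)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests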
00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.255 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:39.255 [2024-07-10 23:22:48.167228] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:18:39.255 [2024-07-10 23:22:48.167316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399613 ] 00:18:39.255 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.255 [2024-07-10 23:22:48.266978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.514 [2024-07-10 23:22:48.492162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.082 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.082 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:40.082 23:22:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:40.340 Nvme0n1 00:18:40.340 23:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:40.606 [ 00:18:40.607 { 00:18:40.607 "name": "Nvme0n1", 00:18:40.607 "aliases": [ 00:18:40.607 "b1ee5cf0-b783-4abe-86e5-cdbbf066da02" 00:18:40.607 ], 00:18:40.607 "product_name": "NVMe disk", 00:18:40.607 "block_size": 4096, 00:18:40.607 "num_blocks": 38912, 00:18:40.607 "uuid": "b1ee5cf0-b783-4abe-86e5-cdbbf066da02", 00:18:40.607 "assigned_rate_limits": { 00:18:40.607 "rw_ios_per_sec": 0, 00:18:40.607 "rw_mbytes_per_sec": 0, 00:18:40.607 "r_mbytes_per_sec": 0, 00:18:40.607 "w_mbytes_per_sec": 0 00:18:40.607 }, 00:18:40.607 "claimed": false, 00:18:40.607 "zoned": false, 00:18:40.607 "supported_io_types": { 00:18:40.607 "read": true, 00:18:40.607 "write": true, 00:18:40.607 "unmap": true, 00:18:40.607 "flush": true, 00:18:40.607 "reset": true, 00:18:40.607 "nvme_admin": true, 00:18:40.607 "nvme_io": true, 00:18:40.607 "nvme_io_md": false, 00:18:40.607 "write_zeroes": true, 00:18:40.607 "zcopy": false, 00:18:40.607 "get_zone_info": false, 00:18:40.607 "zone_management": false, 00:18:40.607 "zone_append": false, 00:18:40.607 "compare": true, 00:18:40.607 "compare_and_write": true, 00:18:40.607 "abort": true, 00:18:40.607 "seek_hole": false, 00:18:40.607 "seek_data": false, 00:18:40.607 "copy": true, 00:18:40.607 "nvme_iov_md": false 00:18:40.607 }, 00:18:40.607 "memory_domains": [ 00:18:40.607 { 00:18:40.607 "dma_device_id": "system", 00:18:40.607 "dma_device_type": 1 00:18:40.607 } 00:18:40.607 ], 00:18:40.607 "driver_specific": { 00:18:40.607 "nvme": [ 00:18:40.607 { 00:18:40.607 "trid": { 00:18:40.607 "trtype": "TCP", 00:18:40.607 "adrfam": "IPv4", 00:18:40.607 "traddr": "10.0.0.2", 00:18:40.607 "trsvcid": "4420", 00:18:40.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:40.607 }, 00:18:40.607 "ctrlr_data": { 00:18:40.607 "cntlid": 1, 00:18:40.607 "vendor_id": "0x8086", 00:18:40.607 "model_number": "SPDK bdev Controller", 00:18:40.607 "serial_number": "SPDK0", 
00:18:40.607 "firmware_revision": "24.09", 00:18:40.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:40.607 "oacs": { 00:18:40.607 "security": 0, 00:18:40.607 "format": 0, 00:18:40.607 "firmware": 0, 00:18:40.607 "ns_manage": 0 00:18:40.607 }, 00:18:40.607 "multi_ctrlr": true, 00:18:40.607 "ana_reporting": false 00:18:40.607 }, 00:18:40.607 "vs": { 00:18:40.607 "nvme_version": "1.3" 00:18:40.607 }, 00:18:40.607 "ns_data": { 00:18:40.607 "id": 1, 00:18:40.607 "can_share": true 00:18:40.607 } 00:18:40.607 } 00:18:40.607 ], 00:18:40.607 "mp_policy": "active_passive" 00:18:40.607 } 00:18:40.607 } 00:18:40.607 ] 00:18:40.607 23:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2399846 00:18:40.607 23:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.607 23:22:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:40.607 Running I/O for 10 seconds... 00:18:41.544 Latency(us) 00:18:41.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.544 Nvme0n1 : 1.00 19789.00 77.30 0.00 0.00 0.00 0.00 0.00 00:18:41.544 =================================================================================================================== 00:18:41.544 Total : 19789.00 77.30 0.00 0.00 0.00 0.00 0.00 00:18:41.544 00:18:42.481 23:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 99712357-75a2-45c3-b108-8be01a872c59 00:18:42.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.481 Nvme0n1 : 2.00 20119.00 78.59 0.00 0.00 0.00 0.00 0.00 00:18:42.481 =================================================================================================================== 00:18:42.481 Total : 20119.00 78.59 0.00 0.00 0.00 0.00 0.00 00:18:42.481 00:18:42.741 true 00:18:42.741 23:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59 00:18:42.741 23:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:43.000 23:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:43.000 23:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:43.000 23:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2399846 00:18:43.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.568 Nvme0n1 : 3.00 20208.00 78.94 0.00 0.00 0.00 0.00 0.00 00:18:43.568 =================================================================================================================== 00:18:43.568 Total : 20208.00 78.94 0.00 0.00 0.00 0.00 0.00 00:18:43.568 00:18:44.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.507 Nvme0n1 : 4.00 20267.75 79.17 0.00 0.00 0.00 0.00 0.00 00:18:44.507 =================================================================================================================== 00:18:44.507 Total : 20267.75 79.17 0.00 
0.00 0.00 0.00 0.00 00:18:44.507 00:18:45.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.885 Nvme0n1 : 5.00 20329.00 79.41 0.00 0.00 0.00 0.00 0.00 00:18:45.885 =================================================================================================================== 00:18:45.885 Total : 20329.00 79.41 0.00 0.00 0.00 0.00 0.00 00:18:45.885 00:18:46.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.824 Nvme0n1 : 6.00 20358.33 79.52 0.00 0.00 0.00 0.00 0.00 00:18:46.824 =================================================================================================================== 00:18:46.824 Total : 20358.33 79.52 0.00 0.00 0.00 0.00 0.00 00:18:46.824 00:18:47.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.761 Nvme0n1 : 7.00 20382.86 79.62 0.00 0.00 0.00 0.00 0.00 00:18:47.761 =================================================================================================================== 00:18:47.761 Total : 20382.86 79.62 0.00 0.00 0.00 0.00 0.00 00:18:47.761 00:18:48.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.698 Nvme0n1 : 8.00 20406.75 79.71 0.00 0.00 0.00 0.00 0.00 00:18:48.698 =================================================================================================================== 00:18:48.698 Total : 20406.75 79.71 0.00 0.00 0.00 0.00 0.00 00:18:48.698 00:18:49.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.636 Nvme0n1 : 9.00 20425.33 79.79 0.00 0.00 0.00 0.00 0.00 00:18:49.636 =================================================================================================================== 00:18:49.636 Total : 20425.33 79.79 0.00 0.00 0.00 0.00 0.00 00:18:49.636 00:18:50.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.644 Nvme0n1 : 10.00 20442.90 79.86 0.00 0.00 0.00 0.00 0.00 00:18:50.644 =================================================================================================================== 00:18:50.644 Total : 20442.90 79.86 0.00 0.00 0.00 0.00 0.00 00:18:50.644 00:18:50.644 00:18:50.644 Latency(us) 00:18:50.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.644 Nvme0n1 : 10.01 20444.68 79.86 0.00 0.00 6257.63 3818.18 15956.59 00:18:50.644 =================================================================================================================== 00:18:50.644 Total : 20444.68 79.86 0.00 0.00 6257.63 3818.18 15956.59 00:18:50.644 0 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2399613 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2399613 ']' 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2399613 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2399613 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:50.644 23:22:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2399613' 00:18:50.644 killing process with pid 2399613 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2399613 00:18:50.644 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.644 00:18:50.644 Latency(us) 00:18:50.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.644 =================================================================================================================== 00:18:50.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.644 23:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2399613 00:18:52.031 23:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:52.031 23:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:52.031 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59 00:18:52.031 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:52.290 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:52.290 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:52.290 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2396300 00:18:52.290 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2396300 00:18:52.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2396300 Killed "${NVMF_APP[@]}" "$@" 00:18:52.290 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2401700 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2401700 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2401700 ']' 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:18:52.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:52.291 23:23:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:52.291 [2024-07-10 23:23:01.324341] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:18:52.291 [2024-07-10 23:23:01.324429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.549 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.549 [2024-07-10 23:23:01.436276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.807 [2024-07-10 23:23:01.640954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.807 [2024-07-10 23:23:01.640994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.807 [2024-07-10 23:23:01.641005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.807 [2024-07-10 23:23:01.641016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.807 [2024-07-10 23:23:01.641026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
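This is the heart of the dirty-recovery case traced above: the target holding the lvstore was killed with SIGKILL (pid 2396300), so the blobstore metadata on the AIO backing file was never cleanly unloaded; a fresh nvmf_tgt is then started and the same backing file is re-registered, which is what triggers the bs_recover/Recover notices just below. A minimal sketch of the pattern, assuming the nvmfappstart helper and the rpc.py paths used throughout this run:

    # kill the target hard so the lvstore is left dirty on its backing file
    kill -9 "$nvmfpid"
    wait "$nvmfpid" || true   # reap it; wait returns non-zero after SIGKILL

    # restart and re-create the AIO bdev over the same file; loading it
    # forces the blobstore to replay its metadata (the bs_recover notices)
    nvmfappstart -m 0x1
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096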
00:18:52.807 [2024-07-10 23:23:01.641057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:53.066 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:18:53.325 [2024-07-10 23:23:02.288101] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore
00:18:53.325 [2024-07-10 23:23:02.288239] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:18:53.325 [2024-07-10 23:23:02.288279] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b1ee5cf0-b783-4abe-86e5-cdbbf066da02
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b1ee5cf0-b783-4abe-86e5-cdbbf066da02
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:53.325 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:18:53.584 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b1ee5cf0-b783-4abe-86e5-cdbbf066da02 -t 2000
00:18:53.584 [
00:18:53.584 {
00:18:53.584 "name": "b1ee5cf0-b783-4abe-86e5-cdbbf066da02",
00:18:53.584 "aliases": [
00:18:53.584 "lvs/lvol"
00:18:53.584 ],
00:18:53.584 "product_name": "Logical Volume",
00:18:53.584 "block_size": 4096,
00:18:53.584 "num_blocks": 38912,
00:18:53.584 "uuid": "b1ee5cf0-b783-4abe-86e5-cdbbf066da02",
00:18:53.584 "assigned_rate_limits": {
00:18:53.584 "rw_ios_per_sec": 0,
00:18:53.584 "rw_mbytes_per_sec": 0,
00:18:53.584 "r_mbytes_per_sec": 0,
00:18:53.584 "w_mbytes_per_sec": 0
00:18:53.584 },
00:18:53.584 "claimed": false,
00:18:53.584 "zoned": false,
00:18:53.584 "supported_io_types": {
00:18:53.584 "read": true,
00:18:53.584 "write": true,
00:18:53.584 "unmap": true,
00:18:53.584 "flush": false,
00:18:53.584 "reset": true,
00:18:53.584 "nvme_admin": false,
00:18:53.584 "nvme_io": false,
00:18:53.584 "nvme_io_md": false,
00:18:53.584 "write_zeroes": true,
00:18:53.584 "zcopy": false,
00:18:53.584 "get_zone_info": false,
00:18:53.584 "zone_management": false,
00:18:53.584 "zone_append": false,
00:18:53.584 "compare": false,
00:18:53.584 "compare_and_write": false,
00:18:53.584 "abort": false,
00:18:53.584 "seek_hole": true,
00:18:53.584 "seek_data": true,
00:18:53.584 "copy": false,
00:18:53.584 "nvme_iov_md": false
00:18:53.584 },
00:18:53.584 "driver_specific": {
00:18:53.584 "lvol": {
00:18:53.584 "lvol_store_uuid": "99712357-75a2-45c3-b108-8be01a872c59",
00:18:53.584 "base_bdev": "aio_bdev",
00:18:53.584 "thin_provision": false,
00:18:53.584 "num_allocated_clusters": 38,
00:18:53.584 "snapshot": false,
00:18:53.584 "clone": false,
00:18:53.584 "esnap_clone": false
00:18:53.584 }
00:18:53.584 }
00:18:53.584 }
00:18:53.584 ]
00:18:53.844 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0
00:18:53.844 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:53.844 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:18:53.844 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:18:53.844 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:53.844 23:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:18:54.103 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:18:54.103 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:18:54.103 [2024-07-10 23:23:03.144397] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:54.363 request:
00:18:54.363 {
00:18:54.363 "uuid": "99712357-75a2-45c3-b108-8be01a872c59",
00:18:54.363 "method": "bdev_lvol_get_lvstores",
00:18:54.363 "req_id": 1
00:18:54.363 }
00:18:54.363 Got JSON-RPC error response
00:18:54.363 response:
00:18:54.363 {
00:18:54.363 "code": -19,
00:18:54.363 "message": "No such device"
00:18:54.363 }
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:54.363 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:18:54.622 aio_bdev
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b1ee5cf0-b783-4abe-86e5-cdbbf066da02
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b1ee5cf0-b783-4abe-86e5-cdbbf066da02
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:54.622 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:18:54.881 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b1ee5cf0-b783-4abe-86e5-cdbbf066da02 -t 2000
00:18:54.881 [
00:18:54.881 {
00:18:54.881 "name": "b1ee5cf0-b783-4abe-86e5-cdbbf066da02",
00:18:54.881 "aliases": [
00:18:54.881 "lvs/lvol"
00:18:54.881 ],
00:18:54.881 "product_name": "Logical Volume",
00:18:54.881 "block_size": 4096,
00:18:54.881 "num_blocks": 38912,
00:18:54.881 "uuid": "b1ee5cf0-b783-4abe-86e5-cdbbf066da02",
00:18:54.881 "assigned_rate_limits": {
00:18:54.881 "rw_ios_per_sec": 0,
00:18:54.881 "rw_mbytes_per_sec": 0,
00:18:54.881 "r_mbytes_per_sec": 0,
00:18:54.881 "w_mbytes_per_sec": 0
00:18:54.881 },
00:18:54.882 "claimed": false,
00:18:54.882 "zoned": false,
00:18:54.882 "supported_io_types": {
00:18:54.882 "read": true,
00:18:54.882 "write": true,
00:18:54.882 "unmap": true,
00:18:54.882 "flush": false,
00:18:54.882 "reset": true,
00:18:54.882 "nvme_admin": false,
00:18:54.882 "nvme_io": false,
00:18:54.882 "nvme_io_md": false,
00:18:54.882 "write_zeroes": true,
00:18:54.882 "zcopy": false,
00:18:54.882 "get_zone_info": false,
00:18:54.882 "zone_management": false,
00:18:54.882 "zone_append": false,
00:18:54.882 "compare": false,
00:18:54.882 "compare_and_write": false,
00:18:54.882 "abort": false,
00:18:54.882 "seek_hole": true,
00:18:54.882 "seek_data": true,
00:18:54.882 "copy": false,
00:18:54.882 "nvme_iov_md": false
00:18:54.882 },
00:18:54.882 "driver_specific": {
00:18:54.882 "lvol": {
00:18:54.882 "lvol_store_uuid": "99712357-75a2-45c3-b108-8be01a872c59",
00:18:54.882 "base_bdev": "aio_bdev",
00:18:54.882 "thin_provision": false,
00:18:54.882 "num_allocated_clusters": 38,
00:18:54.882 "snapshot": false,
00:18:54.882 "clone": false,
00:18:54.882 "esnap_clone": false
00:18:54.882 }
00:18:54.882 }
00:18:54.882 }
00:18:54.882 ]
00:18:54.882 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0
00:18:54.882 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:54.882 23:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:18:55.140 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:18:55.140 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:18:55.140 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:55.400 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:18:55.400 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b1ee5cf0-b783-4abe-86e5-cdbbf066da02
00:18:55.400 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99712357-75a2-45c3-b108-8be01a872c59
00:18:55.659 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:18:55.919
00:18:55.919 real 0m18.396s
00:18:55.919 user 0m47.252s
00:18:55.919 sys 0m3.761s
00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable
00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:18:55.919 ************************************
00:18:55.919 END TEST lvs_grow_dirty
00:18:55.919 ************************************
00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0
00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
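The cleanup above tears the stack down strictly top-down through what was built: the logical volume first, then its lvstore, then the AIO bdev underneath, and finally the backing file itself. Condensed from the trace (paths shortened, UUIDs as used in this run):

    scripts/rpc.py bdev_lvol_delete b1ee5cf0-b783-4abe-86e5-cdbbf066da02
    scripts/rpc.py bdev_lvol_delete_lvstore -u 99712357-75a2-45c3-b108-8be01a872c59
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f test/nvmf/target/aio_bdev   # remove the AIO backing file last

The process_shm trace that follows then archives the /dev/shm trace file for offline analysis before the suite moves on.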
00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:55.919 nvmf_trace.0 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.919 rmmod nvme_tcp 00:18:55.919 rmmod nvme_fabrics 00:18:55.919 rmmod nvme_keyring 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2401700 ']' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2401700 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2401700 ']' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2401700 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2401700 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2401700' 00:18:55.919 killing process with pid 2401700 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2401700 00:18:55.919 23:23:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2401700 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.297 
23:23:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.297 23:23:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.832 23:23:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:59.832 00:18:59.832 real 0m44.855s 00:18:59.832 user 1m10.082s 00:18:59.832 sys 0m9.599s 00:18:59.832 23:23:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.832 23:23:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:59.832 ************************************ 00:18:59.832 END TEST nvmf_lvs_grow 00:18:59.832 ************************************ 00:18:59.832 23:23:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:59.832 23:23:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:59.832 23:23:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:59.832 23:23:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.832 23:23:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.832 ************************************ 00:18:59.832 START TEST nvmf_bdev_io_wait 00:18:59.832 ************************************ 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:59.832 * Looking for test storage... 
00:18:59.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.832 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.833 23:23:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:05.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:05.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.102 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:05.103 Found net devices under 0000:86:00.0: cvl_0_0 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:05.103 Found net devices under 0000:86:00.1: cvl_0_1 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:05.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:19:05.103 00:19:05.103 --- 10.0.0.2 ping statistics --- 00:19:05.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.103 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:19:05.103 00:19:05.103 --- 10.0.0.1 ping statistics --- 00:19:05.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.103 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2405962 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2405962 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2405962 ']' 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.103 23:23:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 [2024-07-10 23:23:13.675980] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
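How the fabric above was assembled is worth noting: one of the two e810 ports (cvl_0_0) is moved into a private network namespace for the target while the initiator keeps cvl_0_1 in the root namespace, so NVMe/TCP traffic crosses the physical link rather than loopback, and the pings verify connectivity in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The target is accordingly launched under ip netns exec cvl_0_0_ns_spdk, as the nvmf_tgt command line above shows, so that it listens on 10.0.0.2.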
00:19:05.103 [2024-07-10 23:23:13.676066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.103 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.103 [2024-07-10 23:23:13.783921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.103 [2024-07-10 23:23:13.986176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.103 [2024-07-10 23:23:13.986223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.103 [2024-07-10 23:23:13.986235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.103 [2024-07-10 23:23:13.986244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.103 [2024-07-10 23:23:13.986253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.103 [2024-07-10 23:23:13.986380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.103 [2024-07-10 23:23:13.986494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.103 [2024-07-10 23:23:13.986556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.103 [2024-07-10 23:23:13.986566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.670 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 [2024-07-10 23:23:14.786799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
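The ordering traced above matters: because nvmf_tgt was started with --wait-for-rpc, the framework pauses before subsystem initialization, and that pause is the only window in which bdev_set_options is accepted. The deliberately tiny bdev I/O pool (-p 5, with a per-thread cache of -c 1) is presumably what starves the bdev layer and forces the io-wait path this test exercises. Condensed, with rpc_cmd being the test suite's wrapper around scripts/rpc.py:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

    rpc_cmd bdev_set_options -p 5 -c 1               # only allowed pre-init
    rpc_cmd framework_start_init                     # finish the deferred startup
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192  # "TCP Transport Init" above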
00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 Malloc0 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 [2024-07-10 23:23:14.915060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2406213 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2406215 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:05.930 { 00:19:05.930 "params": { 00:19:05.930 "name": "Nvme$subsystem", 00:19:05.930 "trtype": "$TEST_TRANSPORT", 00:19:05.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.930 "adrfam": "ipv4", 00:19:05.930 "trsvcid": "$NVMF_PORT", 00:19:05.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.930 "hdgst": ${hdgst:-false}, 00:19:05.930 "ddgst": ${ddgst:-false} 00:19:05.930 }, 00:19:05.930 "method": "bdev_nvme_attach_controller" 00:19:05.930 } 00:19:05.930 EOF 00:19:05.930 )") 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2406217 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:05.930 { 00:19:05.930 "params": { 00:19:05.930 "name": "Nvme$subsystem", 00:19:05.930 "trtype": "$TEST_TRANSPORT", 00:19:05.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.930 "adrfam": "ipv4", 00:19:05.930 "trsvcid": "$NVMF_PORT", 00:19:05.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.930 "hdgst": ${hdgst:-false}, 00:19:05.930 "ddgst": ${ddgst:-false} 00:19:05.930 }, 00:19:05.930 "method": "bdev_nvme_attach_controller" 00:19:05.930 } 00:19:05.930 EOF 00:19:05.930 )") 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2406220 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:05.930 { 00:19:05.930 "params": { 00:19:05.930 "name": "Nvme$subsystem", 00:19:05.930 "trtype": "$TEST_TRANSPORT", 00:19:05.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.930 "adrfam": "ipv4", 00:19:05.930 "trsvcid": "$NVMF_PORT", 00:19:05.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.930 "hdgst": ${hdgst:-false}, 00:19:05.930 "ddgst": ${ddgst:-false} 00:19:05.930 }, 00:19:05.930 "method": "bdev_nvme_attach_controller" 00:19:05.930 } 00:19:05.930 EOF 00:19:05.930 )") 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:05.930 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:05.931 23:23:14 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:05.931 { 00:19:05.931 "params": { 00:19:05.931 "name": "Nvme$subsystem", 00:19:05.931 "trtype": "$TEST_TRANSPORT", 00:19:05.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.931 "adrfam": "ipv4", 00:19:05.931 "trsvcid": "$NVMF_PORT", 00:19:05.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.931 "hdgst": ${hdgst:-false}, 00:19:05.931 "ddgst": ${ddgst:-false} 00:19:05.931 }, 00:19:05.931 "method": "bdev_nvme_attach_controller" 00:19:05.931 } 00:19:05.931 EOF 00:19:05.931 )") 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2406213 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:05.931 "params": { 00:19:05.931 "name": "Nvme1", 00:19:05.931 "trtype": "tcp", 00:19:05.931 "traddr": "10.0.0.2", 00:19:05.931 "adrfam": "ipv4", 00:19:05.931 "trsvcid": "4420", 00:19:05.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.931 "hdgst": false, 00:19:05.931 "ddgst": false 00:19:05.931 }, 00:19:05.931 "method": "bdev_nvme_attach_controller" 00:19:05.931 }' 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
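Each bdevperf instance receives its bdev configuration through process substitution rather than a config file: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (the heredoc fragments above), jq merges them, and bash exposes the result to the child process as /dev/fd/63, the path visible in the bdevperf command lines. A sketch of the wiring, assuming gen_nvmf_target_json from nvmf/common.sh as used in this run:

    build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!   # recorded so the script can wait on this job later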
00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:05.931 "params": { 00:19:05.931 "name": "Nvme1", 00:19:05.931 "trtype": "tcp", 00:19:05.931 "traddr": "10.0.0.2", 00:19:05.931 "adrfam": "ipv4", 00:19:05.931 "trsvcid": "4420", 00:19:05.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.931 "hdgst": false, 00:19:05.931 "ddgst": false 00:19:05.931 }, 00:19:05.931 "method": "bdev_nvme_attach_controller" 00:19:05.931 }' 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:05.931 "params": { 00:19:05.931 "name": "Nvme1", 00:19:05.931 "trtype": "tcp", 00:19:05.931 "traddr": "10.0.0.2", 00:19:05.931 "adrfam": "ipv4", 00:19:05.931 "trsvcid": "4420", 00:19:05.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.931 "hdgst": false, 00:19:05.931 "ddgst": false 00:19:05.931 }, 00:19:05.931 "method": "bdev_nvme_attach_controller" 00:19:05.931 }' 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:05.931 23:23:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:05.931 "params": { 00:19:05.931 "name": "Nvme1", 00:19:05.931 "trtype": "tcp", 00:19:05.931 "traddr": "10.0.0.2", 00:19:05.931 "adrfam": "ipv4", 00:19:05.931 "trsvcid": "4420", 00:19:05.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.931 "hdgst": false, 00:19:05.931 "ddgst": false 00:19:05.931 }, 00:19:05.931 "method": "bdev_nvme_attach_controller" 00:19:05.931 }' 00:19:05.931 [2024-07-10 23:23:14.975040] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:05.931 [2024-07-10 23:23:14.975135] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:05.931 [2024-07-10 23:23:14.994826] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:05.931 [2024-07-10 23:23:14.994924] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:05.931 [2024-07-10 23:23:14.995646] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:05.931 [2024-07-10 23:23:14.995738] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:05.931 [2024-07-10 23:23:14.995954] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
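The four instances run concurrently against the same remote namespace, each pinned to its own core (masks 0x10/0x20/0x40/0x80) and given a distinct shared-memory id via -i 1..4, which is why the EAL parameter lines show separate --file-prefix=spdk1..spdk4 hugepage domains. The script reaps the writer first (the wait 2406213 above) and the remaining jobs afterwards (the waits traced further below):

    wait "$WRITE_PID"   # 2406213, write workload
    wait "$READ_PID"    # 2406215, read workload
    wait "$FLUSH_PID"   # 2406217, flush workload
    wait "$UNMAP_PID"   # 2406220, unmap workload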
00:19:05.931 [2024-07-10 23:23:14.996035] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:19:06.190 EAL: No free 2048 kB hugepages reported on node 1
00:19:06.190 EAL: No free 2048 kB hugepages reported on node 1
00:19:06.190 [2024-07-10 23:23:15.191232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.190 EAL: No free 2048 kB hugepages reported on node 1
00:19:06.449 [2024-07-10 23:23:15.282247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.449 EAL: No free 2048 kB hugepages reported on node 1
00:19:06.449 [2024-07-10 23:23:15.328786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.449 [2024-07-10 23:23:15.390716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.449 [2024-07-10 23:23:15.396478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:19:06.449 [2024-07-10 23:23:15.513951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:19:06.708 [2024-07-10 23:23:15.537090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:19:06.708 [2024-07-10 23:23:15.603288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:19:06.967 Running I/O for 1 seconds...
00:19:07.226 Running I/O for 1 seconds...
00:19:07.226 Running I/O for 1 seconds...
00:19:07.226 Running I/O for 1 seconds...
00:19:07.795
00:19:07.795 Latency(us)
00:19:07.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:07.795 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:19:07.795 Nvme1n1 : 1.01 11896.24 46.47 0.00 0.00 10722.07 5584.81 16868.40
00:19:07.795 ===================================================================================================================
00:19:07.795 Total : 11896.24 46.47 0.00 0.00 10722.07 5584.81 16868.40
00:19:08.054
00:19:08.054 Latency(us)
00:19:08.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.054 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:19:08.054 Nvme1n1 : 1.01 9176.86 35.85 0.00 0.00 13886.62 8377.21 24732.72
00:19:08.054 ===================================================================================================================
00:19:08.054 Total : 9176.86 35.85 0.00 0.00 13886.62 8377.21 24732.72
00:19:08.054
00:19:08.054 Latency(us)
00:19:08.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.054 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:19:08.054 Nvme1n1 : 1.00 216680.51 846.41 0.00 0.00 588.57 240.42 787.14
00:19:08.054 ===================================================================================================================
00:19:08.054 Total : 216680.51 846.41 0.00 0.00 588.57 240.42 787.14
00:19:08.314
00:19:08.314 Latency(us)
00:19:08.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.314 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:19:08.314 Nvme1n1 : 1.01 9949.35 38.86 0.00 0.00 12823.11 2621.44 19831.76
00:19:08.314 ===================================================================================================================
00:19:08.314 Total : 9949.35 38.86 0.00 0.00 12823.11 2621.44 19831.76
00:19:08.883 23:23:17 nvmf_tcp.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@38 -- # wait 2406215 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2406217 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2406220 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.452 rmmod nvme_tcp 00:19:09.452 rmmod nvme_fabrics 00:19:09.452 rmmod nvme_keyring 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2405962 ']' 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2405962 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2405962 ']' 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2405962 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2405962 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2405962' 00:19:09.452 killing process with pid 2405962 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2405962 00:19:09.452 23:23:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2405962 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.831 23:23:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.738 23:23:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.738 00:19:12.738 real 0m13.310s 00:19:12.738 user 0m32.821s 00:19:12.738 sys 0m5.918s 00:19:12.738 23:23:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.738 23:23:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.738 ************************************ 00:19:12.738 END TEST nvmf_bdev_io_wait 00:19:12.738 ************************************ 00:19:12.738 23:23:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:12.738 23:23:21 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:12.738 23:23:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:12.738 23:23:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.738 23:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:12.738 ************************************ 00:19:12.738 START TEST nvmf_queue_depth 00:19:12.738 ************************************ 00:19:12.738 23:23:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:12.738 * Looking for test storage... 
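The bdev_io_wait teardown just traced boils down to the following plain sequence (pids and interface names are from this run; _remove_spdk_ns is not expanded in the trace, so the netns deletion shown here is an assumption about its effect):

sync
modprobe -v -r nvme-tcp           # trace shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill 2405962 && wait 2405962      # nvmf_tgt pid for this run
ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1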
00:19:12.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:19:12.997 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.998 23:23:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.368 
23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:18.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:18.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:18.368 Found net devices under 0000:86:00.0: cvl_0_0 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:18.368 Found net devices under 0000:86:00.1: cvl_0_1 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.368 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.369 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.369 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.369 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.369 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.369 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.369 23:23:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:18.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:18.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:19:18.369 00:19:18.369 --- 10.0.0.2 ping statistics --- 00:19:18.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.369 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:19:18.369 00:19:18.369 --- 10.0.0.1 ping statistics --- 00:19:18.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.369 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2410441 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2410441 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2410441 ']' 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.369 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:18.369 [2024-07-10 23:23:27.237114] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
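The two pings above close out the nvmf_tcp_init plumbing for this test: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced above (a replay of what the common.sh trace shows, not new steps):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator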
00:19:18.369 [2024-07-10 23:23:27.237215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.369 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.369 [2024-07-10 23:23:27.346917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.627 [2024-07-10 23:23:27.554424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.627 [2024-07-10 23:23:27.554465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.627 [2024-07-10 23:23:27.554477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.627 [2024-07-10 23:23:27.554489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.627 [2024-07-10 23:23:27.554499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.627 [2024-07-10 23:23:27.554531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.193 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.193 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:19:19.193 23:23:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.193 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.193 23:23:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 [2024-07-10 23:23:28.040513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 Malloc0 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.193 
23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 [2024-07-10 23:23:28.169081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2410481 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2410481 /var/tmp/bdevperf.sock 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2410481 ']' 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.193 23:23:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:19.193 [2024-07-10 23:23:28.243733] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
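The queue-depth setup traced above, condensed into direct rpc.py calls: the target (nvmf_tgt inside the namespace) gets a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem with namespace and listener; the host side runs bdevperf in -z (wait-for-RPC) mode on its own socket, attaches the controller over TCP, and kicks off the run. Paths and arguments are as logged; only the inlined sequencing is editorial.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc=/var/tmp/bdevperf.sock

# target side (RPCs from the trace)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: bdevperf at queue depth 1024, 4 KiB verify I/O for 10 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r $bdevperf_rpc -q 1024 -o 4096 -w verify -t 10 &
$rpc -s $bdevperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $bdevperf_rpc perform_tests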
00:19:19.193 [2024-07-10 23:23:28.243825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410481 ]
00:19:19.452 EAL: No free 2048 kB hugepages reported on node 1
00:19:19.452 [2024-07-10 23:23:28.347650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:19.710 [2024-07-10 23:23:28.569456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:19.969 23:23:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:19.969 23:23:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:19:19.969 23:23:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:19.969 23:23:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:19.969 23:23:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:19:20.228 NVMe0n1
00:19:20.228 23:23:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:20.228 23:23:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:20.228 Running I/O for 10 seconds...
00:19:32.443
00:19:32.443 Latency(us)
00:19:32.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:32.443 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:19:32.443 Verification LBA range: start 0x0 length 0x4000
00:19:32.443 NVMe0n1 : 10.05 10490.75 40.98 0.00 0.00 97259.42 9459.98 66105.88
00:19:32.443 ===================================================================================================================
00:19:32.443 Total : 10490.75 40.98 0.00 0.00 97259.42 9459.98 66105.88
00:19:32.443 0
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2410481
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2410481 ']'
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2410481
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410481
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410481'
killing process with pid 2410481
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2410481
Received shutdown signal, test time was about 10.000000 seconds
00:19:32.443
00:19:32.443 Latency(us)
00:19:32.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:32.443 ===================================================================================================================
00:19:32.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:32.443 23:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2410481
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:32.443 rmmod nvme_tcp
00:19:32.443 rmmod nvme_fabrics
00:19:32.443 rmmod nvme_keyring
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2410441 ']'
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2410441
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2410441 ']'
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2410441
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410441
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:32.443 23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410441'
killing process with pid 2410441
23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2410441
23:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2410441
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:33.012 23:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:35.547 23:23:44 nvmf_tcp.nvmf_queue_depth --
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.547 00:19:35.547 real 0m22.343s 00:19:35.547 user 0m27.790s 00:19:35.547 sys 0m5.628s 00:19:35.547 23:23:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.547 23:23:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:35.547 ************************************ 00:19:35.547 END TEST nvmf_queue_depth 00:19:35.547 ************************************ 00:19:35.547 23:23:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.547 23:23:44 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:35.547 23:23:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.547 23:23:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.547 23:23:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.547 ************************************ 00:19:35.547 START TEST nvmf_target_multipath 00:19:35.547 ************************************ 00:19:35.547 23:23:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:35.547 * Looking for test storage... 00:19:35.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.548 23:23:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.822 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.822 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.822 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:19:40.822 00:19:40.823 --- 10.0.0.2 ping statistics --- 00:19:40.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.823 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:40.823 00:19:40.823 --- 10.0.0.1 ping statistics --- 00:19:40.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.823 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:40.823 only one NIC for nvmf test 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.823 rmmod nvme_tcp 00:19:40.823 rmmod nvme_fabrics 00:19:40.823 rmmod nvme_keyring 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.823 23:23:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.727 23:23:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.728 23:23:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:42.728 00:19:42.728 real 0m7.395s 00:19:42.728 user 0m1.535s 00:19:42.728 sys 0m3.850s 00:19:42.728 23:23:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.728 23:23:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:42.728 ************************************ 00:19:42.728 END TEST nvmf_target_multipath 00:19:42.728 ************************************ 00:19:42.728 23:23:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:42.728 23:23:51 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:42.728 23:23:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:42.728 23:23:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.728 23:23:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:42.728 ************************************ 00:19:42.728 START TEST nvmf_zcopy 00:19:42.728 ************************************ 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:42.728 * Looking for test storage... 
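(Note: the "only one NIC for nvmf test" exit traced above is the guard at target/multipath.sh@45-48. A minimal sketch of that guard, reconstructed from the xtrace; the empty string tested by '[' -z ']' is presumably $NVMF_SECOND_TARGET_IP, which nvmf_tcp_init left unset at nvmf/common.sh@240, so the multipath test has no second target port and bails out cleanly:)

    # sketch of target/multipath.sh@45-48; the variable name is an assumption
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
            echo 'only one NIC for nvmf test'
            nvmftestfini     # tear down the netns, addresses and nvme-* modules, as logged above
            exit 0           # not a failure: this rig simply cannot exercise multipath
    fi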
00:19:42.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.728 23:23:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.002 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.002 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.003 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.003 
23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.003 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.003 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.003 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:48.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:19:48.003 00:19:48.003 --- 10.0.0.2 ping statistics --- 00:19:48.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.003 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:19:48.003 00:19:48.003 --- 10.0.0.1 ping statistics --- 00:19:48.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.003 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:48.003 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2419550 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2419550 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2419550 ']' 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.004 23:23:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.004 [2024-07-10 23:23:56.968353] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:48.004 [2024-07-10 23:23:56.968435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.004 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.263 [2024-07-10 23:23:57.078859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.263 [2024-07-10 23:23:57.283343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.263 [2024-07-10 23:23:57.283389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
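(Note: before each test the harness builds the same point-to-point rig via nvmf_tcp_init, as traced above. With NET_TYPE=phy the two E810 ports cvl_0_0/cvl_0_1 are presumably cabled back-to-back, and the target side is isolated in a network namespace so initiator and target traffic crosses a real link. Condensed from the exact commands in this log:)

    # condensed replay of the nvmf_tcp_init steps traced above
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port -> target netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # sanity: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity: target -> initiator

The target application is then launched inside that namespace ("ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2" in the nvmfappstart trace above), which is why its startup notices appear at this point in the log.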
00:19:48.263 [2024-07-10 23:23:57.283402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.263 [2024-07-10 23:23:57.283413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.263 [2024-07-10 23:23:57.283422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.263 [2024-07-10 23:23:57.283451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 [2024-07-10 23:23:57.761513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 [2024-07-10 23:23:57.781716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 malloc0 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.832 
23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.832 { 00:19:48.832 "params": { 00:19:48.832 "name": "Nvme$subsystem", 00:19:48.832 "trtype": "$TEST_TRANSPORT", 00:19:48.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.832 "adrfam": "ipv4", 00:19:48.832 "trsvcid": "$NVMF_PORT", 00:19:48.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.832 "hdgst": ${hdgst:-false}, 00:19:48.832 "ddgst": ${ddgst:-false} 00:19:48.832 }, 00:19:48.832 "method": "bdev_nvme_attach_controller" 00:19:48.832 } 00:19:48.832 EOF 00:19:48.832 )") 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:48.832 23:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:48.832 "params": { 00:19:48.832 "name": "Nvme1", 00:19:48.832 "trtype": "tcp", 00:19:48.832 "traddr": "10.0.0.2", 00:19:48.832 "adrfam": "ipv4", 00:19:48.832 "trsvcid": "4420", 00:19:48.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.832 "hdgst": false, 00:19:48.832 "ddgst": false 00:19:48.832 }, 00:19:48.832 "method": "bdev_nvme_attach_controller" 00:19:48.832 }' 00:19:49.092 [2024-07-10 23:23:57.934363] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:49.092 [2024-07-10 23:23:57.934449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419714 ] 00:19:49.092 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.092 [2024-07-10 23:23:58.038641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.351 [2024-07-10 23:23:58.253045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.920 Running I/O for 10 seconds... 
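(Note: the --json /dev/fd/62 config bdevperf is consuming for this 10-second verify run is the printf traced just above, produced by gen_nvmf_target_json after zcopy.sh@22-@30 provisioned the target: a tcp transport created with --zcopy, subsystem cnode1 listening on 10.0.0.2:4420, and malloc0 attached as NSID 1. Reassembled here for readability; only the per-controller fragment actually printed in the trace is shown, any outer wrapper is not visible in this log:)

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }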
00:20:00.006
00:20:00.006                                                                                                Latency(us)
00:20:00.006 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:00.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:20:00.006 	 Verification LBA range: start 0x0 length 0x1000
00:20:00.006 	 Nvme1n1             :      10.01    7402.83      57.83       0.00       0.00   17241.25     329.46   24618.74
00:20:00.006 ===================================================================================================================
00:20:00.006 	 Total               :               7402.83      57.83       0.00       0.00   17241.25     329.46   24618.74
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2422133
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:00.946 {
00:20:00.946 "params": {
00:20:00.946 "name": "Nvme$subsystem",
00:20:00.946 "trtype": "$TEST_TRANSPORT",
00:20:00.946 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:00.946 "adrfam": "ipv4",
00:20:00.946 "trsvcid": "$NVMF_PORT",
00:20:00.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:00.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:00.946 "hdgst": ${hdgst:-false},
00:20:00.946 "ddgst": ${ddgst:-false}
00:20:00.946 },
00:20:00.946 "method": "bdev_nvme_attach_controller"
00:20:00.946 }
00:20:00.946 EOF
00:20:00.946 )")
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:20:00.946 [2024-07-10 23:24:09.894930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:00.946 [2024-07-10 23:24:09.894978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:20:00.946 23:24:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:00.946 "params": { 00:20:00.946 "name": "Nvme1", 00:20:00.946 "trtype": "tcp", 00:20:00.946 "traddr": "10.0.0.2", 00:20:00.946 "adrfam": "ipv4", 00:20:00.946 "trsvcid": "4420", 00:20:00.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.946 "hdgst": false, 00:20:00.946 "ddgst": false 00:20:00.946 }, 00:20:00.946 "method": "bdev_nvme_attach_controller" 00:20:00.946 }' 00:20:00.946 [2024-07-10 23:24:09.906903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.906933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.914939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.914965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.922937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.922960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.934952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.934974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.943389] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:20:00.946 [2024-07-10 23:24:09.943467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422133 ] 00:20:00.946 [2024-07-10 23:24:09.946999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.947020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.959028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.959048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.971049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.971070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.983094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.983114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 [2024-07-10 23:24:09.995122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:09.995142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.946 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.946 [2024-07-10 23:24:10.007184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.946 [2024-07-10 23:24:10.007205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.019210] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.019231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.031220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.031239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.043267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.043290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.046066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.206 [2024-07-10 23:24:10.055308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.055330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.067334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.067357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.079365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.079386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.091397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.091417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.103430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.103449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.115464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.115482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.127483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.127501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.139536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.139554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.151561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.206 [2024-07-10 23:24:10.151580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.206 [2024-07-10 23:24:10.163611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.163632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.175635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.175655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.187659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.187679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.199700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.199719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.211730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.211749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.223759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.223778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.235815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.235834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.247832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.247851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.259855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.259875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.207 [2024-07-10 23:24:10.268670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.207 [2024-07-10 23:24:10.271903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.207 [2024-07-10 23:24:10.271922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.466 [2024-07-10 23:24:10.283931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.466 [2024-07-10 23:24:10.283953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.466 [2024-07-10 23:24:10.295977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.466 [2024-07-10 23:24:10.295999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.466 [2024-07-10 23:24:10.308017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.466 [2024-07-10 23:24:10.308037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.466 [2024-07-10 23:24:10.320024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.466 [2024-07-10 23:24:10.320044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.466 [2024-07-10 23:24:10.332067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.466 [2024-07-10 23:24:10.332087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.466 [2024-07-10 23:24:10.344102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.466 [2024-07-10 23:24:10.344122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.356125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
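(Note: the error pairs streaming above and below — spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused "Unable to add namespace" — each correspond to one nvmf_subsystem_add_ns RPC issued while the second bdevperf run is being set up. NSID 1 was already claimed by malloc0 at zcopy.sh@30, so, judging by the nvmf_rpc_ns_paused frames, the target pauses the subsystem for the add, finds the NSID occupied, rejects it, and resumes; the repetition looks like a deliberate exercise of that pause/add path rather than a test failure. A sketch of the call behind each pair, assuming rpc_cmd in the trace forwards to scripts/rpc.py as usual in these tests:)

    # each error pair above maps to one call shaped like this:
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # -n 1 requests the NSID the earlier add already took, so the target logs
    # both errors and the RPC fails without disturbing the existing namespace.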
00:20:01.467 [2024-07-10 23:24:10.356146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.368188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.368213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.380223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.380248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.392247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.392275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.404275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.404296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.416296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.416316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.428340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.428360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.440369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.440388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.452409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.452428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.464439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.464458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.476463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.476482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.488507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.488529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.500536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.500555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.512558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.512578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.467 [2024-07-10 23:24:10.524609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.467 [2024-07-10 23:24:10.524628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.536638] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.536659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.548662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.548682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.560705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.560725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.572728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.572748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.584772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.584792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.596817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.596837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.608827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.608846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.620872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.620891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.632901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.632921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.644926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.644945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.656982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.657004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.669008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.669030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.681046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.681067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.693080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.693101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.705100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.705120] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.717142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.717173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.729187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.729206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.741220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.741240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.753242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.753272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.765263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.765283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.777315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.777335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.727 [2024-07-10 23:24:10.789345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.727 [2024-07-10 23:24:10.789365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.987 [2024-07-10 23:24:10.801364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.987 [2024-07-10 23:24:10.801384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.987 [2024-07-10 23:24:10.813411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.987 [2024-07-10 23:24:10.813430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.987 [2024-07-10 23:24:10.825444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.987 [2024-07-10 23:24:10.825463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.987 [2024-07-10 23:24:10.874488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.987 [2024-07-10 23:24:10.874512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.987 [2024-07-10 23:24:10.885618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.987 [2024-07-10 23:24:10.885638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.987 Running I/O for 5 seconds... 
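(Note: the 5-second randrw run now starts. For reference, the invocation traced at target/zcopy.sh@37 above, annotated with the standard bdevperf flag meanings; the path is the one from this workspace:)

    #   --json /dev/fd/63   bdev config streamed in from gen_nvmf_target_json
    #   -t 5                run time in seconds
    #   -q 128              I/O queue depth
    #   -w randrw           mixed random read/write workload
    #   -M 50               rwmixread: 50 percent reads
    #   -o 8192             I/O size in bytes (8 KiB)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192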
00:20:01.987 [2024-07-10 23:24:10.902278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:01.987 [2024-07-10 23:24:10.902302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at roughly 10-50 ms intervals for the duration of the I/O run (elapsed 00:20:01.987 through 00:20:06.142), through 2024-07-10 23:24:15.050913 ...]
00:20:06.142 [2024-07-10 23:24:15.066621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:06.142 [2024-07-10 23:24:15.066646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:06.142 [2024-07-10 23:24:15.083530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:06.142 [2024-07-10 23:24:15.083554]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.100217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.100241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.111257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.111281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.127980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.128005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.143528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.143552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.154616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.154639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.170437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.170462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.181671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.181695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.142 [2024-07-10 23:24:15.197612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.142 [2024-07-10 23:24:15.197636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.213985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.214009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.230439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.230462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.246971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.246995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.263398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.263422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.274157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.274191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.290420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.290444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.306868] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.306896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.323491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.323514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.335421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.335444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.352273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.352297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.368068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.368092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.380117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.380142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.395622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.395648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.406585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.406611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.423194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.423219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.439325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.439349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.401 [2024-07-10 23:24:15.456055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.401 [2024-07-10 23:24:15.456080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.660 [2024-07-10 23:24:15.472806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.660 [2024-07-10 23:24:15.472830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.660 [2024-07-10 23:24:15.489069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.660 [2024-07-10 23:24:15.489093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.505558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.505583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.521793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.521817] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.538256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.538280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.554481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.554505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.566758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.566782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.581954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.581977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.593165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.593193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.609787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.609810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.625804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.625829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.642456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.642480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.659140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.659171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.675461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.675486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.692520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.692545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.708941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.708966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.661 [2024-07-10 23:24:15.719912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.661 [2024-07-10 23:24:15.719936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.736590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.736614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.751781] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.751805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.768010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.768035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.784301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.784326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.800131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.800156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.816931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.816955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.828909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.828933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.844790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.844814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.861102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.861126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.872947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.872971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.889094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.889117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 [2024-07-10 23:24:15.900327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.920 [2024-07-10 23:24:15.900350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.920 00:20:06.920 Latency(us) 00:20:06.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.920 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:06.920 Nvme1n1 : 5.01 14107.70 110.22 0.00 0.00 9064.32 4017.64 19603.81 00:20:06.921 =================================================================================================================== 00:20:06.921 Total : 14107.70 110.22 0.00 0.00 9064.32 4017.64 19603.81 00:20:06.921 [2024-07-10 23:24:15.911344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.921 [2024-07-10 23:24:15.911366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.921 [2024-07-10 23:24:15.923364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.921 [2024-07-10 23:24:15.923396] 
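The MiB/s column follows directly from the IOPS column at this I/O size; a quick consistency check with bc (an editorial aside, not output from the captured run):

  # 8192-byte I/Os: MiB/s = IOPS * 8192 / 2^20
  echo '14107.70 * 8192 / 1048576' | bc -l    # ~110.22, matching the table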
[... the add-namespace error pairs resume at 23:24:15.911, immediately after the statistics are printed, and repeat every ~12 ms through 23:24:16.990 (elapsed 00:20:06.921-00:20:07.959), right up to the point where the backgrounded add/remove loop is killed below; identical repetitions condensed ...]
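This flood appears to come from a backgrounded add/remove loop in zcopy.sh (note the wait/kill of PID 2422133 just below) that keeps re-adding a namespace while NSID 1 is still attached. A minimal by-hand reproduction using the same rpc_cmd helper the script itself uses later in this log (a sketch; assumes the malloc0 bdev and cnode1 subsystem created earlier in this run):

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: NSID 1 attached
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # fails: Requested NSID 1 already in use
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach; the add would now succeed again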
00:20:07.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2422133) - No such process
00:20:07.959 23:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2422133
00:20:07.959 23:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:07.959 23:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.959 23:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:07.959 delay0
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.959 23:24:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:20:08.218 EAL: No free 2048 kB hugepages reported on node 1
00:20:08.218 [2024-07-10 23:24:17.156090] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:20:14.811 [2024-07-10 23:24:23.390661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set
00:20:14.811 Initializing NVMe Controllers
00:20:14.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:14.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:14.811 Initialization complete. Launching workers.
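For readers unfamiliar with the abort example app, the flags in the invocation above decode roughly as follows (meanings as commonly used across SPDK's perf-style example tools; run the binary with -h to confirm on a given build):

  # -c 0x1      core mask: run on core 0 only
  # -t 5        run time in seconds
  # -q 64       queue depth
  # -w randrw   random mixed read/write workload
  # -M 50       50% of I/Os are reads
  # -l warning  log level
  # -r '...'    transport ID: NVMe/TCP target at 10.0.0.2:4420, namespace 1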
00:20:14.811 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 7326
00:20:14.811 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7557, failed to submit 61
00:20:14.811 success 7428, unsuccess 129, failed 0
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:14.811 rmmod nvme_tcp
00:20:14.811 rmmod nvme_fabrics
00:20:14.811 rmmod nvme_keyring
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2419550 ']'
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2419550
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2419550 ']'
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2419550
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419550
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419550'
00:20:14.811 killing process with pid 2419550
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2419550
00:20:14.811 23:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2419550
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:16.189 23:24:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:18.096 23:24:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:18.096
00:20:18.096 real 0m35.360s
00:20:18.096 user 0m49.843s
00:20:18.096 sys 0m10.352s
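The abort counters above are self-consistent (an editorial cross-check, not tool output):

  # 7428 success + 129 unsuccess           = 7557 aborts submitted
  # 7557 submitted + 61 failed to submit   = 7618 abort attempts
  # 292 I/Os completed + 7326 I/Os failed  = 7618, i.e. one abort attempt per I/O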
00:20:18.096 23:24:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:18.096 23:24:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:20:18.096 ************************************
00:20:18.096 END TEST nvmf_zcopy
00:20:18.096 ************************************
00:20:18.096 23:24:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:20:18.096 23:24:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:20:18.096 23:24:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:20:18.096 23:24:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:18.096 23:24:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:18.096 ************************************
00:20:18.096 START TEST nvmf_nmic
00:20:18.096 ************************************
00:20:18.096 23:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:20:18.096 * Looking for test storage...
00:20:18.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
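The START/END banners and the real/user/sys times above come from the run_test wrapper in autotest_common.sh; behaviorally it boils down to something like the following sketch (simplified, not the actual implementation):

  run_test() {                  # usage: run_test <name> <script> [args...]
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                   # run the wrapped test, timing it
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }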
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... further repetitions of the same three /opt toolchain prefixes condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated prefixes condensed ...]:/var/lib/snapd/snap/bin
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated prefixes condensed ...]:/var/lib/snapd/snap/bin
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated prefixes condensed ...]:/var/lib/snapd/snap/bin
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:20:18.097 23:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=()
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
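Those pci_bus_cache lookups key on PCI vendor:device IDs; the same inventory can be taken by hand with lspci (an aside; 8086:159b is the ID actually matched on this host):

  lspci -d 8086:159b    # lists the E810 functions found below: 0000:86:00.0 and 0000:86:00.1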
pci_devs+=("${e810[@]}") 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:23.373 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:23.373 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:23.373 Found net devices under 0000:86:00.0: cvl_0_0 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:23.373 Found net devices under 0000:86:00.1: cvl_0_1 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.373 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:23.374 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:23.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:20:23.633 00:20:23.633 --- 10.0.0.2 ping statistics --- 00:20:23.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.633 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:20:23.633 00:20:23.633 --- 10.0.0.1 ping statistics --- 00:20:23.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.633 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2427963 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2427963 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2427963 ']' 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.633 23:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:23.633 [2024-07-10 23:24:32.607673] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:20:23.633 [2024-07-10 23:24:32.607786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.633 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.892 [2024-07-10 23:24:32.718746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.892 [2024-07-10 23:24:32.929680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.892 [2024-07-10 23:24:32.929723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:23.892 [2024-07-10 23:24:32.929736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.892 [2024-07-10 23:24:32.929745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.893 [2024-07-10 23:24:32.929754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.893 [2024-07-10 23:24:32.929822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.893 [2024-07-10 23:24:32.929832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.893 [2024-07-10 23:24:32.929857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.893 [2024-07-10 23:24:32.929863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.461 [2024-07-10 23:24:33.424712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.461 Malloc0 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.461 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 [2024-07-10 23:24:33.546721] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:24.720 test case1: single bdev can't be used in multiple subsystems 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 [2024-07-10 23:24:33.570610] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:24.720 [2024-07-10 23:24:33.570645] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:24.720 [2024-07-10 23:24:33.570659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.720 request: 00:20:24.720 { 00:20:24.720 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:24.720 "namespace": { 00:20:24.720 "bdev_name": "Malloc0", 00:20:24.720 "no_auto_visible": false 00:20:24.720 }, 00:20:24.720 "method": "nvmf_subsystem_add_ns", 00:20:24.720 "req_id": 1 00:20:24.720 } 00:20:24.720 Got JSON-RPC error response 00:20:24.720 response: 00:20:24.720 { 00:20:24.720 "code": -32602, 00:20:24.720 "message": "Invalid parameters" 00:20:24.720 } 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:24.720 Adding namespace failed - expected result. 
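test case1 above hinges on SPDK's bdev claim semantics: attaching Malloc0 to cnode1 takes an exclusive_write claim on the bdev, so a second nvmf_subsystem_add_ns against cnode2 must be rejected. A minimal sketch of that check, assuming a running nvmf_tgt that already has Malloc0 attached to cnode1 and using the same rpc.py commands seen in the trace (the rpc.py path is abbreviated here):

    # cnode2 is created and given a listener, but must not be able to claim Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: Malloc0 attached to two subsystems" >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'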
00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:24.720 test case2: host connect to nvmf target in multiple paths 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 [2024-07-10 23:24:33.582783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.720 23:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:25.656 23:24:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:27.097 23:24:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:27.097 23:24:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:20:27.097 23:24:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:27.097 23:24:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:27.097 23:24:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:20:29.027 23:24:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:29.027 [global] 00:20:29.027 thread=1 00:20:29.027 invalidate=1 00:20:29.027 rw=write 00:20:29.027 time_based=1 00:20:29.027 runtime=1 00:20:29.027 ioengine=libaio 00:20:29.027 direct=1 00:20:29.027 bs=4096 00:20:29.027 iodepth=1 00:20:29.027 norandommap=0 00:20:29.027 numjobs=1 00:20:29.027 00:20:29.027 verify_dump=1 00:20:29.027 verify_backlog=512 00:20:29.027 verify_state_save=0 00:20:29.027 do_verify=1 00:20:29.027 verify=crc32c-intel 00:20:29.027 [job0] 00:20:29.027 filename=/dev/nvme0n1 00:20:29.027 Could not set queue depth (nvme0n1) 00:20:29.283 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:29.283 fio-3.35 00:20:29.283 Starting 1 thread 00:20:30.211 00:20:30.211 job0: (groupid=0, jobs=1): err= 0: pid=2429045: Wed Jul 10 23:24:39 2024 00:20:30.211 read: IOPS=1608, BW=6434KiB/s (6588kB/s)(6440KiB/1001msec) 00:20:30.211 slat (nsec): min=7071, max=37478, avg=8015.22, stdev=1463.20 
00:20:30.211 clat (usec): min=266, max=520, avg=302.75, stdev=32.97 00:20:30.211 lat (usec): min=275, max=528, avg=310.76, stdev=33.10 00:20:30.211 clat percentiles (usec): 00:20:30.211 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 277], 20.00th=[ 281], 00:20:30.211 | 30.00th=[ 285], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:20:30.211 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 351], 00:20:30.211 | 99.00th=[ 449], 99.50th=[ 482], 99.90th=[ 519], 99.95th=[ 523], 00:20:30.211 | 99.99th=[ 523] 00:20:30.211 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:30.211 slat (usec): min=10, max=24152, avg=23.68, stdev=533.43 00:20:30.211 clat (usec): min=154, max=394, avg=214.73, stdev=24.14 00:20:30.211 lat (usec): min=167, max=24494, avg=238.41, stdev=536.79 00:20:30.211 clat percentiles (usec): 00:20:30.211 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 206], 00:20:30.211 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:20:30.211 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 241], 95.00th=[ 243], 00:20:30.211 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 343], 00:20:30.211 | 99.99th=[ 396] 00:20:30.211 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:20:30.211 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:30.211 lat (usec) : 250=55.30%, 500=44.56%, 750=0.14% 00:20:30.211 cpu : usr=4.40%, sys=4.60%, ctx=3662, majf=0, minf=2 00:20:30.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.211 issued rwts: total=1610,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.211 00:20:30.211 Run status group 0 (all jobs): 00:20:30.211 READ: bw=6434KiB/s (6588kB/s), 6434KiB/s-6434KiB/s (6588kB/s-6588kB/s), io=6440KiB (6595kB), run=1001-1001msec 00:20:30.211 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:20:30.211 00:20:30.211 Disk stats (read/write): 00:20:30.211 nvme0n1: ios=1562/1689, merge=0/0, ticks=1437/342, in_queue=1779, util=98.50% 00:20:30.211 23:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:31.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.143 rmmod nvme_tcp 00:20:31.143 rmmod nvme_fabrics 00:20:31.143 rmmod nvme_keyring 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2427963 ']' 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2427963 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2427963 ']' 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2427963 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427963 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427963' 00:20:31.143 killing process with pid 2427963 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2427963 00:20:31.143 23:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2427963 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.517 23:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.048 23:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.048 00:20:35.048 real 0m16.576s 00:20:35.048 user 0m39.675s 00:20:35.048 sys 0m4.948s 00:20:35.048 23:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.048 23:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:35.048 ************************************ 00:20:35.048 END TEST nvmf_nmic 00:20:35.048 ************************************ 00:20:35.048 23:24:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:35.048 23:24:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:35.048 23:24:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.048 
23:24:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.048 23:24:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.048 ************************************ 00:20:35.048 START TEST nvmf_fio_target 00:20:35.048 ************************************ 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:35.048 * Looking for test storage... 00:20:35.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:35.048 23:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.049 23:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.307 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.308 23:24:48 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:40.308 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:40.308 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.308 23:24:48 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:40.308 Found net devices under 0000:86:00.0: cvl_0_0 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:40.308 Found net devices under 0000:86:00.1: cvl_0_1 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:20:40.308 00:20:40.308 --- 10.0.0.2 ping statistics --- 00:20:40.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.308 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:20:40.308 00:20:40.308 --- 10.0.0.1 ping statistics --- 00:20:40.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.308 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2433015 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2433015 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2433015 ']' 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
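nvmftestinit follows the same pattern in both tests: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, both ends are addressed and verified with ping, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the commands the trace just replayed, assuming the interface names and addresses shown above:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

nvmfappstart then records the PID and polls until the application listens on /var/tmp/spdk.sock, which is the wait in progress at this point in the log.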
00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.308 23:24:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.308 [2024-07-10 23:24:49.069793] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:20:40.308 [2024-07-10 23:24:49.069888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.308 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.308 [2024-07-10 23:24:49.180833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.566 [2024-07-10 23:24:49.392458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.566 [2024-07-10 23:24:49.392504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.566 [2024-07-10 23:24:49.392516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.566 [2024-07-10 23:24:49.392525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.566 [2024-07-10 23:24:49.392535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.566 [2024-07-10 23:24:49.392660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.566 [2024-07-10 23:24:49.392774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.566 [2024-07-10 23:24:49.392832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.566 [2024-07-10 23:24:49.392842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.822 23:24:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:41.077 [2024-07-10 23:24:50.045897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.077 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:41.333 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:41.333 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:41.590 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:41.590 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:41.846 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
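fio.sh builds its four namespaces entirely through rpc.py: two standalone malloc bdevs, a raid0 striped over two more, and a concat volume over another three, each 64 MiB with a 512-byte block size. A condensed sketch of the sequence the trace is stepping through, assuming rpc.py is on PATH and letting bdev_malloc_create pick the MallocN names as above:

    rpc.py bdev_malloc_create 64 512          # -> Malloc2
    rpc.py bdev_malloc_create 64 512          # -> Malloc3
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_malloc_create 64 512          # -> Malloc4; Malloc5 and Malloc6 likewise
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # once nqn.2016-06.io.spdk:cnode1 exists, the raid volumes become namespaces too:
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

Together with Malloc0 and Malloc1 this yields the /dev/nvme0n1 through /dev/nvme0n4 devices that the four fio jobs target below.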
00:20:41.846 23:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.102 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:42.102 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:42.359 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.616 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:42.616 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.872 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:42.872 23:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.129 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:43.129 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:43.385 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:43.385 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:43.385 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.642 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:43.642 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.899 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.156 [2024-07-10 23:24:52.970046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.156 23:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:44.156 23:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:44.413 23:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:45.781 23:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:45.781 23:24:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:20:45.781 23:24:54 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.781 23:24:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:20:45.781 23:24:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:20:45.781 23:24:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:20:47.701 23:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:47.701 [global] 00:20:47.701 thread=1 00:20:47.701 invalidate=1 00:20:47.701 rw=write 00:20:47.701 time_based=1 00:20:47.701 runtime=1 00:20:47.701 ioengine=libaio 00:20:47.701 direct=1 00:20:47.701 bs=4096 00:20:47.701 iodepth=1 00:20:47.701 norandommap=0 00:20:47.701 numjobs=1 00:20:47.701 00:20:47.701 verify_dump=1 00:20:47.701 verify_backlog=512 00:20:47.701 verify_state_save=0 00:20:47.701 do_verify=1 00:20:47.701 verify=crc32c-intel 00:20:47.701 [job0] 00:20:47.701 filename=/dev/nvme0n1 00:20:47.701 [job1] 00:20:47.701 filename=/dev/nvme0n2 00:20:47.701 [job2] 00:20:47.701 filename=/dev/nvme0n3 00:20:47.701 [job3] 00:20:47.701 filename=/dev/nvme0n4 00:20:47.701 Could not set queue depth (nvme0n1) 00:20:47.701 Could not set queue depth (nvme0n2) 00:20:47.701 Could not set queue depth (nvme0n3) 00:20:47.701 Could not set queue depth (nvme0n4) 00:20:47.975 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.975 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.975 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.975 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:47.975 fio-3.35 00:20:47.975 Starting 4 threads 00:20:49.378 00:20:49.378 job0: (groupid=0, jobs=1): err= 0: pid=2434381: Wed Jul 10 23:24:58 2024 00:20:49.378 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:20:49.378 slat (nsec): min=13013, max=24372, avg=21798.36, stdev=2041.91 00:20:49.378 clat (usec): min=40780, max=41022, avg=40959.32, stdev=46.00 00:20:49.378 lat (usec): min=40793, max=41044, avg=40981.12, stdev=47.68 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:49.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:49.378 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:49.378 | 99.99th=[41157] 00:20:49.378 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:20:49.378 slat (nsec): min=9573, max=47357, avg=12351.67, stdev=2613.59 
00:20:49.378 clat (usec): min=170, max=359, avg=235.27, stdev=18.22 00:20:49.378 lat (usec): min=181, max=379, avg=247.62, stdev=19.53 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:20:49.378 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:20:49.378 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 262], 00:20:49.378 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 359], 99.95th=[ 359], 00:20:49.378 | 99.99th=[ 359] 00:20:49.378 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:49.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:49.378 lat (usec) : 250=81.27%, 500=14.61% 00:20:49.378 lat (msec) : 50=4.12% 00:20:49.378 cpu : usr=0.00%, sys=1.26%, ctx=536, majf=0, minf=1 00:20:49.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.378 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.378 job1: (groupid=0, jobs=1): err= 0: pid=2434382: Wed Jul 10 23:24:58 2024 00:20:49.378 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:20:49.378 slat (nsec): min=9688, max=23522, avg=17805.82, stdev=4951.27 00:20:49.378 clat (usec): min=40848, max=42040, avg=41198.39, stdev=422.21 00:20:49.378 lat (usec): min=40869, max=42061, avg=41216.20, stdev=422.98 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:49.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.378 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:20:49.378 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:49.378 | 99.99th=[42206] 00:20:49.378 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:20:49.378 slat (nsec): min=9351, max=42410, avg=11395.98, stdev=2781.75 00:20:49.378 clat (usec): min=177, max=415, avg=214.31, stdev=26.62 00:20:49.378 lat (usec): min=187, max=457, avg=225.71, stdev=27.89 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:20:49.378 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:20:49.378 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 265], 00:20:49.378 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 416], 99.95th=[ 416], 00:20:49.378 | 99.99th=[ 416] 00:20:49.378 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:49.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:49.378 lat (usec) : 250=86.52%, 500=9.36% 00:20:49.378 lat (msec) : 50=4.12% 00:20:49.378 cpu : usr=0.49%, sys=0.29%, ctx=536, majf=0, minf=2 00:20:49.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.378 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.378 job2: (groupid=0, jobs=1): err= 0: pid=2434383: Wed Jul 10 23:24:58 2024 00:20:49.378 read: IOPS=21, BW=84.6KiB/s 
(86.6kB/s)(88.0KiB/1040msec) 00:20:49.378 slat (nsec): min=10200, max=23757, avg=21633.91, stdev=2623.39 00:20:49.378 clat (usec): min=40892, max=41522, avg=40992.47, stdev=123.13 00:20:49.378 lat (usec): min=40915, max=41532, avg=41014.11, stdev=120.60 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:49.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:49.378 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:49.378 | 99.99th=[41681] 00:20:49.378 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:20:49.378 slat (nsec): min=11082, max=76420, avg=16988.60, stdev=12938.56 00:20:49.378 clat (usec): min=175, max=320, avg=245.48, stdev=18.40 00:20:49.378 lat (usec): min=229, max=355, avg=262.47, stdev=20.00 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 233], 00:20:49.378 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:20:49.378 | 70.00th=[ 245], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 281], 00:20:49.378 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 322], 00:20:49.378 | 99.99th=[ 322] 00:20:49.378 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:49.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:49.378 lat (usec) : 250=69.66%, 500=26.22% 00:20:49.378 lat (msec) : 50=4.12% 00:20:49.378 cpu : usr=0.87%, sys=0.48%, ctx=537, majf=0, minf=1 00:20:49.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.378 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.378 job3: (groupid=0, jobs=1): err= 0: pid=2434384: Wed Jul 10 23:24:58 2024 00:20:49.378 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:20:49.378 slat (nsec): min=10788, max=22213, avg=21018.50, stdev=2298.86 00:20:49.378 clat (usec): min=40935, max=41344, avg=40988.96, stdev=81.63 00:20:49.378 lat (usec): min=40957, max=41355, avg=41009.98, stdev=79.43 00:20:49.378 clat percentiles (usec): 00:20:49.378 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:49.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:49.379 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:49.379 | 99.99th=[41157] 00:20:49.379 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:20:49.379 slat (nsec): min=9707, max=45270, avg=13828.00, stdev=4405.60 00:20:49.379 clat (usec): min=182, max=426, avg=231.98, stdev=39.73 00:20:49.379 lat (usec): min=197, max=444, avg=245.81, stdev=41.85 00:20:49.379 clat percentiles (usec): 00:20:49.379 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:20:49.379 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:20:49.379 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 343], 00:20:49.379 | 99.00th=[ 392], 99.50th=[ 396], 99.90th=[ 429], 99.95th=[ 429], 00:20:49.379 | 99.99th=[ 429] 00:20:49.379 bw ( KiB/s): min= 4096, 
max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:49.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:49.379 lat (usec) : 250=84.27%, 500=11.61% 00:20:49.379 lat (msec) : 50=4.12% 00:20:49.379 cpu : usr=0.58%, sys=0.68%, ctx=535, majf=0, minf=1 00:20:49.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.379 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.379 00:20:49.379 Run status group 0 (all jobs): 00:20:49.379 READ: bw=338KiB/s (347kB/s), 84.6KiB/s-85.9KiB/s (86.6kB/s-87.9kB/s), io=352KiB (360kB), run=1025-1040msec 00:20:49.379 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-1998KiB/s (2016kB/s-2046kB/s), io=8192KiB (8389kB), run=1025-1040msec 00:20:49.379 00:20:49.379 Disk stats (read/write): 00:20:49.379 nvme0n1: ios=43/512, merge=0/0, ticks=1682/117, in_queue=1799, util=98.10% 00:20:49.379 nvme0n2: ios=41/512, merge=0/0, ticks=1686/105, in_queue=1791, util=98.47% 00:20:49.379 nvme0n3: ios=42/512, merge=0/0, ticks=1684/115, in_queue=1799, util=98.44% 00:20:49.379 nvme0n4: ios=42/512, merge=0/0, ticks=1683/113, in_queue=1796, util=98.53% 00:20:49.379 23:24:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:49.379 [global] 00:20:49.379 thread=1 00:20:49.379 invalidate=1 00:20:49.379 rw=randwrite 00:20:49.379 time_based=1 00:20:49.379 runtime=1 00:20:49.379 ioengine=libaio 00:20:49.379 direct=1 00:20:49.379 bs=4096 00:20:49.379 iodepth=1 00:20:49.379 norandommap=0 00:20:49.379 numjobs=1 00:20:49.379 00:20:49.379 verify_dump=1 00:20:49.379 verify_backlog=512 00:20:49.379 verify_state_save=0 00:20:49.379 do_verify=1 00:20:49.379 verify=crc32c-intel 00:20:49.379 [job0] 00:20:49.379 filename=/dev/nvme0n1 00:20:49.379 [job1] 00:20:49.379 filename=/dev/nvme0n2 00:20:49.379 [job2] 00:20:49.379 filename=/dev/nvme0n3 00:20:49.379 [job3] 00:20:49.379 filename=/dev/nvme0n4 00:20:49.379 Could not set queue depth (nvme0n1) 00:20:49.379 Could not set queue depth (nvme0n2) 00:20:49.379 Could not set queue depth (nvme0n3) 00:20:49.379 Could not set queue depth (nvme0n4) 00:20:49.379 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:49.379 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:49.379 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:49.379 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:49.379 fio-3.35 00:20:49.379 Starting 4 threads 00:20:50.746 00:20:50.746 job0: (groupid=0, jobs=1): err= 0: pid=2434779: Wed Jul 10 23:24:59 2024 00:20:50.746 read: IOPS=22, BW=90.4KiB/s (92.5kB/s)(92.0KiB/1018msec) 00:20:50.746 slat (nsec): min=9430, max=25108, avg=21351.13, stdev=3617.17 00:20:50.746 clat (usec): min=459, max=41398, avg=39246.71, stdev=8456.37 00:20:50.746 lat (usec): min=484, max=41409, avg=39268.06, stdev=8455.50 00:20:50.746 clat percentiles (usec): 00:20:50.746 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:50.746 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:20:50.746 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:50.746 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:50.746 | 99.99th=[41157] 00:20:50.746 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:20:50.746 slat (nsec): min=9210, max=44268, avg=10411.24, stdev=1923.77 00:20:50.746 clat (usec): min=181, max=355, avg=211.48, stdev=17.79 00:20:50.746 lat (usec): min=191, max=400, avg=221.90, stdev=18.56 00:20:50.746 clat percentiles (usec): 00:20:50.746 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:20:50.746 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:20:50.746 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 243], 00:20:50.746 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 355], 99.95th=[ 355], 00:20:50.746 | 99.99th=[ 355] 00:20:50.746 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.746 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.746 lat (usec) : 250=92.71%, 500=3.18% 00:20:50.746 lat (msec) : 50=4.11% 00:20:50.746 cpu : usr=0.59%, sys=0.29%, ctx=537, majf=0, minf=1 00:20:50.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.747 job1: (groupid=0, jobs=1): err= 0: pid=2434790: Wed Jul 10 23:24:59 2024 00:20:50.747 read: IOPS=23, BW=94.8KiB/s (97.0kB/s)(96.0KiB/1013msec) 00:20:50.747 slat (nsec): min=9292, max=23660, avg=21156.00, stdev=4559.37 00:20:50.747 clat (usec): min=343, max=42041, avg=37621.68, stdev=11482.45 00:20:50.747 lat (usec): min=366, max=42064, avg=37642.83, stdev=11481.87 00:20:50.747 clat percentiles (usec): 00:20:50.747 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[40633], 20.00th=[40633], 00:20:50.747 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:50.747 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:50.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:50.747 | 99.99th=[42206] 00:20:50.747 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:20:50.747 slat (nsec): min=8883, max=40620, avg=10349.61, stdev=1652.27 00:20:50.747 clat (usec): min=174, max=419, avg=198.93, stdev=17.53 00:20:50.747 lat (usec): min=183, max=460, avg=209.28, stdev=18.39 00:20:50.747 clat percentiles (usec): 00:20:50.747 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 188], 00:20:50.747 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:20:50.747 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 221], 00:20:50.747 | 99.00th=[ 247], 99.50th=[ 314], 99.90th=[ 420], 99.95th=[ 420], 00:20:50.747 | 99.99th=[ 420] 00:20:50.747 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.747 lat (usec) : 250=94.59%, 500=1.31% 00:20:50.747 lat (msec) : 50=4.10% 00:20:50.747 cpu : usr=0.10%, sys=0.69%, ctx=537, majf=0, minf=1 00:20:50.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.747 job2: (groupid=0, jobs=1): err= 0: pid=2434806: Wed Jul 10 23:24:59 2024 00:20:50.747 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:20:50.747 slat (nsec): min=10231, max=23572, avg=21901.82, stdev=3302.32 00:20:50.747 clat (usec): min=40865, max=42056, avg=41112.62, stdev=357.77 00:20:50.747 lat (usec): min=40888, max=42077, avg=41134.52, stdev=357.93 00:20:50.747 clat percentiles (usec): 00:20:50.747 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:50.747 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:50.747 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:20:50.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:50.747 | 99.99th=[42206] 00:20:50.747 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:20:50.747 slat (nsec): min=10341, max=69049, avg=16777.31, stdev=13059.30 00:20:50.747 clat (usec): min=157, max=364, avg=231.04, stdev=28.52 00:20:50.747 lat (usec): min=187, max=387, avg=247.82, stdev=31.40 00:20:50.747 clat percentiles (usec): 00:20:50.747 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 204], 20.00th=[ 212], 00:20:50.747 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:20:50.747 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 285], 00:20:50.747 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 363], 99.95th=[ 363], 00:20:50.747 | 99.99th=[ 363] 00:20:50.747 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.747 lat (usec) : 250=80.15%, 500=15.73% 00:20:50.747 lat (msec) : 50=4.12% 00:20:50.747 cpu : usr=0.39%, sys=0.97%, ctx=534, majf=0, minf=2 00:20:50.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.747 job3: (groupid=0, jobs=1): err= 0: pid=2434811: Wed Jul 10 23:24:59 2024 00:20:50.747 read: IOPS=195, BW=783KiB/s (802kB/s)(784KiB/1001msec) 00:20:50.747 slat (nsec): min=7539, max=42882, avg=9970.11, stdev=4992.43 00:20:50.747 clat (usec): min=238, max=41036, avg=4430.52, stdev=12349.88 00:20:50.747 lat (usec): min=246, max=41046, avg=4440.49, stdev=12353.88 00:20:50.747 clat percentiles (usec): 00:20:50.747 | 1.00th=[ 241], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:20:50.747 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 281], 00:20:50.747 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[41157], 95.00th=[41157], 00:20:50.747 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:50.747 | 99.99th=[41157] 00:20:50.747 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:50.747 slat (nsec): min=10927, max=45020, avg=12250.89, stdev=2457.28 00:20:50.747 clat (usec): min=172, max=402, avg=234.43, stdev=27.11 00:20:50.747 lat (usec): min=184, max=444, avg=246.68, stdev=27.47 00:20:50.747 clat percentiles (usec): 
00:20:50.747 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 206], 20.00th=[ 215], 00:20:50.747 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:20:50.747 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:20:50.747 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 404], 99.95th=[ 404], 00:20:50.747 | 99.99th=[ 404] 00:20:50.747 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.747 lat (usec) : 250=55.79%, 500=41.38% 00:20:50.747 lat (msec) : 50=2.82% 00:20:50.747 cpu : usr=0.60%, sys=1.20%, ctx=710, majf=0, minf=1 00:20:50.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.747 issued rwts: total=196,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.747 00:20:50.747 Run status group 0 (all jobs): 00:20:50.747 READ: bw=1025KiB/s (1050kB/s), 85.1KiB/s-783KiB/s (87.1kB/s-802kB/s), io=1060KiB (1085kB), run=1001-1034msec 00:20:50.747 WRITE: bw=7923KiB/s (8113kB/s), 1981KiB/s-2046KiB/s (2028kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1034msec 00:20:50.747 00:20:50.747 Disk stats (read/write): 00:20:50.747 nvme0n1: ios=49/512, merge=0/0, ticks=1729/110, in_queue=1839, util=98.40% 00:20:50.747 nvme0n2: ios=46/512, merge=0/0, ticks=1729/100, in_queue=1829, util=98.48% 00:20:50.747 nvme0n3: ios=43/512, merge=0/0, ticks=1258/112, in_queue=1370, util=95.53% 00:20:50.747 nvme0n4: ios=41/512, merge=0/0, ticks=1682/118, in_queue=1800, util=98.53% 00:20:50.747 23:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:50.747 [global] 00:20:50.747 thread=1 00:20:50.747 invalidate=1 00:20:50.747 rw=write 00:20:50.747 time_based=1 00:20:50.747 runtime=1 00:20:50.747 ioengine=libaio 00:20:50.747 direct=1 00:20:50.747 bs=4096 00:20:50.747 iodepth=128 00:20:50.747 norandommap=0 00:20:50.747 numjobs=1 00:20:50.747 00:20:50.747 verify_dump=1 00:20:50.747 verify_backlog=512 00:20:50.747 verify_state_save=0 00:20:50.747 do_verify=1 00:20:50.747 verify=crc32c-intel 00:20:50.747 [job0] 00:20:50.747 filename=/dev/nvme0n1 00:20:50.747 [job1] 00:20:50.747 filename=/dev/nvme0n2 00:20:50.747 [job2] 00:20:50.747 filename=/dev/nvme0n3 00:20:50.747 [job3] 00:20:50.747 filename=/dev/nvme0n4 00:20:50.747 Could not set queue depth (nvme0n1) 00:20:50.747 Could not set queue depth (nvme0n2) 00:20:50.747 Could not set queue depth (nvme0n3) 00:20:50.747 Could not set queue depth (nvme0n4) 00:20:51.005 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.005 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.005 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.005 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:51.005 fio-3.35 00:20:51.005 Starting 4 threads 00:20:52.377 00:20:52.377 job0: (groupid=0, jobs=1): err= 0: pid=2435230: Wed Jul 10 23:25:01 2024 00:20:52.377 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:20:52.377 slat (nsec): 
min=1033, max=22360k, avg=94879.28, stdev=721862.15 00:20:52.377 clat (usec): min=3693, max=42027, avg=12138.75, stdev=4342.71 00:20:52.377 lat (usec): min=3704, max=42056, avg=12233.63, stdev=4395.52 00:20:52.377 clat percentiles (usec): 00:20:52.377 | 1.00th=[ 6128], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9896], 00:20:52.377 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469], 00:20:52.377 | 70.00th=[12256], 80.00th=[14091], 90.00th=[16909], 95.00th=[19006], 00:20:52.377 | 99.00th=[34866], 99.50th=[34866], 99.90th=[38011], 99.95th=[41681], 00:20:52.377 | 99.99th=[42206] 00:20:52.377 write: IOPS=5797, BW=22.6MiB/s (23.7MB/s)(22.9MiB/1010msec); 0 zone resets 00:20:52.377 slat (usec): min=2, max=8464, avg=73.60, stdev=412.49 00:20:52.377 clat (usec): min=1028, max=30844, avg=10180.70, stdev=3080.17 00:20:52.377 lat (usec): min=1060, max=30848, avg=10254.30, stdev=3105.94 00:20:52.377 clat percentiles (usec): 00:20:52.377 | 1.00th=[ 3097], 5.00th=[ 5342], 10.00th=[ 6390], 20.00th=[ 8455], 00:20:52.377 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:20:52.377 | 70.00th=[10945], 80.00th=[11207], 90.00th=[12387], 95.00th=[13960], 00:20:52.377 | 99.00th=[22152], 99.50th=[26608], 99.90th=[30802], 99.95th=[30802], 00:20:52.377 | 99.99th=[30802] 00:20:52.377 bw ( KiB/s): min=21240, max=24576, per=33.79%, avg=22908.00, stdev=2358.91, samples=2 00:20:52.377 iops : min= 5310, max= 6144, avg=5727.00, stdev=589.73, samples=2 00:20:52.377 lat (msec) : 2=0.07%, 4=1.30%, 10=29.49%, 20=66.89%, 50=2.25% 00:20:52.377 cpu : usr=4.56%, sys=6.34%, ctx=605, majf=0, minf=1 00:20:52.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:52.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.378 issued rwts: total=5632,5855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.378 job1: (groupid=0, jobs=1): err= 0: pid=2435247: Wed Jul 10 23:25:01 2024 00:20:52.378 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1008msec) 00:20:52.378 slat (nsec): min=1319, max=17988k, avg=137621.14, stdev=967109.35 00:20:52.378 clat (usec): min=3913, max=59800, avg=18375.87, stdev=8407.75 00:20:52.378 lat (usec): min=5877, max=59829, avg=18513.49, stdev=8501.16 00:20:52.378 clat percentiles (usec): 00:20:52.378 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11076], 00:20:52.378 | 30.00th=[13435], 40.00th=[15139], 50.00th=[16581], 60.00th=[17957], 00:20:52.378 | 70.00th=[19268], 80.00th=[22938], 90.00th=[27657], 95.00th=[35914], 00:20:52.378 | 99.00th=[47973], 99.50th=[48497], 99.90th=[52167], 99.95th=[56886], 00:20:52.378 | 99.99th=[60031] 00:20:52.378 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:20:52.378 slat (usec): min=2, max=16257, avg=180.16, stdev=932.00 00:20:52.378 clat (usec): min=1131, max=58119, avg=24140.02, stdev=15862.09 00:20:52.378 lat (usec): min=1135, max=58131, avg=24320.18, stdev=15985.94 00:20:52.378 clat percentiles (usec): 00:20:52.378 | 1.00th=[ 2212], 5.00th=[ 6587], 10.00th=[ 7635], 20.00th=[10028], 00:20:52.378 | 30.00th=[12256], 40.00th=[16909], 50.00th=[20317], 60.00th=[22152], 00:20:52.378 | 70.00th=[27132], 80.00th=[43254], 90.00th=[51119], 95.00th=[55313], 00:20:52.378 | 99.00th=[57410], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:20:52.378 | 99.99th=[57934] 00:20:52.378 bw ( KiB/s): min= 
9712, max=14864, per=18.12%, avg=12288.00, stdev=3643.01, samples=2 00:20:52.378 iops : min= 2428, max= 3716, avg=3072.00, stdev=910.75, samples=2 00:20:52.378 lat (msec) : 2=0.18%, 4=0.77%, 10=12.16%, 20=48.21%, 50=32.10% 00:20:52.378 lat (msec) : 100=6.59% 00:20:52.378 cpu : usr=2.68%, sys=3.97%, ctx=296, majf=0, minf=1 00:20:52.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:20:52.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.378 issued rwts: total=2925,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.378 job2: (groupid=0, jobs=1): err= 0: pid=2435276: Wed Jul 10 23:25:01 2024 00:20:52.378 read: IOPS=3397, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1003msec) 00:20:52.378 slat (nsec): min=1722, max=7806.9k, avg=119220.30, stdev=674460.14 00:20:52.378 clat (usec): min=954, max=26051, avg=14588.09, stdev=2892.61 00:20:52.378 lat (usec): min=3111, max=26056, avg=14707.31, stdev=2946.59 00:20:52.378 clat percentiles (usec): 00:20:52.378 | 1.00th=[ 6390], 5.00th=[10159], 10.00th=[11600], 20.00th=[12518], 00:20:52.378 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14484], 60.00th=[15008], 00:20:52.378 | 70.00th=[15401], 80.00th=[15926], 90.00th=[17695], 95.00th=[20055], 00:20:52.378 | 99.00th=[22414], 99.50th=[24249], 99.90th=[26084], 99.95th=[26084], 00:20:52.378 | 99.99th=[26084] 00:20:52.378 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:20:52.378 slat (usec): min=2, max=40148, avg=158.73, stdev=1159.03 00:20:52.378 clat (msec): min=4, max=100, avg=18.49, stdev= 6.94 00:20:52.378 lat (msec): min=4, max=100, avg=18.65, stdev= 7.12 00:20:52.378 clat percentiles (msec): 00:20:52.378 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:20:52.378 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18], 00:20:52.378 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 29], 95.00th=[ 31], 00:20:52.378 | 99.00th=[ 33], 99.50th=[ 33], 99.90th=[ 101], 99.95th=[ 101], 00:20:52.378 | 99.99th=[ 101] 00:20:52.378 bw ( KiB/s): min=12288, max=16384, per=21.15%, avg=14336.00, stdev=2896.31, samples=2 00:20:52.378 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:20:52.378 lat (usec) : 1000=0.01% 00:20:52.378 lat (msec) : 4=0.29%, 10=3.25%, 20=77.27%, 50=19.06%, 100=0.06% 00:20:52.378 lat (msec) : 250=0.06% 00:20:52.378 cpu : usr=2.89%, sys=5.19%, ctx=333, majf=0, minf=1 00:20:52.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:52.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.378 issued rwts: total=3408,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.378 job3: (groupid=0, jobs=1): err= 0: pid=2435286: Wed Jul 10 23:25:01 2024 00:20:52.378 read: IOPS=4121, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec) 00:20:52.378 slat (nsec): min=1547, max=6864.8k, avg=101603.22, stdev=553641.62 00:20:52.378 clat (usec): min=2510, max=24770, avg=12950.45, stdev=2318.49 00:20:52.378 lat (usec): min=6896, max=24772, avg=13052.05, stdev=2355.35 00:20:52.378 clat percentiles (usec): 00:20:52.378 | 1.00th=[ 6980], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11469], 00:20:52.378 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 
60.00th=[13304], 00:20:52.378 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15795], 95.00th=[17171], 00:20:52.378 | 99.00th=[19530], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:20:52.378 | 99.99th=[24773] 00:20:52.378 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:20:52.378 slat (usec): min=2, max=18952, avg=117.09, stdev=657.07 00:20:52.378 clat (usec): min=6338, max=49722, avg=15964.78, stdev=7617.44 00:20:52.378 lat (usec): min=6413, max=49754, avg=16081.87, stdev=7681.16 00:20:52.378 clat percentiles (usec): 00:20:52.378 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[11994], 00:20:52.378 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[13435], 00:20:52.378 | 70.00th=[14615], 80.00th=[17433], 90.00th=[28705], 95.00th=[35914], 00:20:52.378 | 99.00th=[41157], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:20:52.378 | 99.99th=[49546] 00:20:52.378 bw ( KiB/s): min=16384, max=19960, per=26.80%, avg=18172.00, stdev=2528.61, samples=2 00:20:52.378 iops : min= 4096, max= 4990, avg=4543.00, stdev=632.15, samples=2 00:20:52.378 lat (msec) : 4=0.01%, 10=5.97%, 20=84.88%, 50=9.15% 00:20:52.378 cpu : usr=3.97%, sys=6.45%, ctx=500, majf=0, minf=1 00:20:52.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:52.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.378 issued rwts: total=4159,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.378 00:20:52.378 Run status group 0 (all jobs): 00:20:52.378 READ: bw=62.4MiB/s (65.4MB/s), 11.3MiB/s-21.8MiB/s (11.9MB/s-22.8MB/s), io=63.0MiB (66.0MB), run=1003-1010msec 00:20:52.378 WRITE: bw=66.2MiB/s (69.4MB/s), 11.9MiB/s-22.6MiB/s (12.5MB/s-23.7MB/s), io=66.9MiB (70.1MB), run=1003-1010msec 00:20:52.378 00:20:52.378 Disk stats (read/write): 00:20:52.378 nvme0n1: ios=4690/5120, merge=0/0, ticks=50989/46166, in_queue=97155, util=98.00% 00:20:52.378 nvme0n2: ios=2258/2560, merge=0/0, ticks=21074/32241, in_queue=53315, util=98.25% 00:20:52.378 nvme0n3: ios=2597/2687, merge=0/0, ticks=19433/27565, in_queue=46998, util=98.38% 00:20:52.378 nvme0n4: ios=3603/4096, merge=0/0, ticks=23112/26350, in_queue=49462, util=98.79% 00:20:52.378 23:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:52.378 [global] 00:20:52.378 thread=1 00:20:52.378 invalidate=1 00:20:52.378 rw=randwrite 00:20:52.378 time_based=1 00:20:52.378 runtime=1 00:20:52.378 ioengine=libaio 00:20:52.378 direct=1 00:20:52.378 bs=4096 00:20:52.378 iodepth=128 00:20:52.378 norandommap=0 00:20:52.378 numjobs=1 00:20:52.378 00:20:52.378 verify_dump=1 00:20:52.378 verify_backlog=512 00:20:52.378 verify_state_save=0 00:20:52.378 do_verify=1 00:20:52.378 verify=crc32c-intel 00:20:52.378 [job0] 00:20:52.378 filename=/dev/nvme0n1 00:20:52.378 [job1] 00:20:52.378 filename=/dev/nvme0n2 00:20:52.378 [job2] 00:20:52.378 filename=/dev/nvme0n3 00:20:52.378 [job3] 00:20:52.378 filename=/dev/nvme0n4 00:20:52.378 Could not set queue depth (nvme0n1) 00:20:52.378 Could not set queue depth (nvme0n2) 00:20:52.378 Could not set queue depth (nvme0n3) 00:20:52.378 Could not set queue depth (nvme0n4) 00:20:52.635 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
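The INI job file that fio-wrapper echoed just above maps one-to-one onto a standalone fio run. A hand-written equivalent would be roughly the following; the filename is hypothetical, and the wrapper's real temp-file handling is not visible in the log.

    cat > nvmf-randwrite.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio nvmf-randwrite.fio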
00:20:52.635 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.635 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.635 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.635 fio-3.35 00:20:52.635 Starting 4 threads 00:20:54.009 00:20:54.009 job0: (groupid=0, jobs=1): err= 0: pid=2435691: Wed Jul 10 23:25:02 2024 00:20:54.009 read: IOPS=2559, BW=10.00MiB/s (10.5MB/s)(10.1MiB/1012msec) 00:20:54.009 slat (nsec): min=1253, max=25850k, avg=165072.49, stdev=1266380.05 00:20:54.009 clat (usec): min=5943, max=94406, avg=18403.57, stdev=11514.53 00:20:54.009 lat (usec): min=5955, max=94413, avg=18568.64, stdev=11656.11 00:20:54.009 clat percentiles (usec): 00:20:54.009 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[11338], 20.00th=[12256], 00:20:54.009 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13698], 60.00th=[15533], 00:20:54.009 | 70.00th=[17695], 80.00th=[22938], 90.00th=[31327], 95.00th=[35390], 00:20:54.009 | 99.00th=[74974], 99.50th=[91751], 99.90th=[94897], 99.95th=[94897], 00:20:54.009 | 99.99th=[94897] 00:20:54.009 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:20:54.009 slat (nsec): min=1916, max=11717k, avg=176200.60, stdev=862922.43 00:20:54.009 clat (msec): min=2, max=111, avg=26.18, stdev=19.64 00:20:54.009 lat (msec): min=2, max=111, avg=26.36, stdev=19.75 00:20:54.009 clat percentiles (msec): 00:20:54.009 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12], 00:20:54.009 | 30.00th=[ 13], 40.00th=[ 18], 50.00th=[ 22], 60.00th=[ 23], 00:20:54.009 | 70.00th=[ 25], 80.00th=[ 41], 90.00th=[ 59], 95.00th=[ 64], 00:20:54.009 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 112], 99.95th=[ 112], 00:20:54.009 | 99.99th=[ 112] 00:20:54.009 bw ( KiB/s): min=11464, max=12336, per=17.12%, avg=11900.00, stdev=616.60, samples=2 00:20:54.009 iops : min= 2866, max= 3084, avg=2975.00, stdev=154.15, samples=2 00:20:54.009 lat (msec) : 4=0.11%, 10=6.71%, 20=53.18%, 50=32.09%, 100=7.65% 00:20:54.009 lat (msec) : 250=0.26% 00:20:54.009 cpu : usr=2.57%, sys=3.07%, ctx=324, majf=0, minf=1 00:20:54.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:54.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.009 issued rwts: total=2590,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.009 job1: (groupid=0, jobs=1): err= 0: pid=2435708: Wed Jul 10 23:25:02 2024 00:20:54.009 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:20:54.009 slat (nsec): min=1016, max=10476k, avg=79856.99, stdev=631155.81 00:20:54.009 clat (usec): min=2447, max=25874, avg=11416.78, stdev=3409.16 00:20:54.010 lat (usec): min=2457, max=25881, avg=11496.64, stdev=3444.70 00:20:54.010 clat percentiles (usec): 00:20:54.010 | 1.00th=[ 2671], 5.00th=[ 5997], 10.00th=[ 8291], 20.00th=[ 9765], 00:20:54.010 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[11207], 00:20:54.010 | 70.00th=[12256], 80.00th=[13304], 90.00th=[15533], 95.00th=[17957], 00:20:54.010 | 99.00th=[23200], 99.50th=[25297], 99.90th=[25822], 99.95th=[25822], 00:20:54.010 | 99.99th=[25822] 00:20:54.010 write: IOPS=5750, BW=22.5MiB/s (23.6MB/s)(22.7MiB/1011msec); 0 zone resets 00:20:54.010 slat (nsec): 
min=1791, max=13325k, avg=75528.61, stdev=497433.25 00:20:54.010 clat (usec): min=668, max=55835, avg=10917.08, stdev=6589.32 00:20:54.010 lat (usec): min=734, max=55847, avg=10992.60, stdev=6615.84 00:20:54.010 clat percentiles (usec): 00:20:54.010 | 1.00th=[ 2114], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 7308], 00:20:54.010 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:20:54.010 | 70.00th=[11076], 80.00th=[12649], 90.00th=[14222], 95.00th=[15533], 00:20:54.010 | 99.00th=[51119], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:20:54.010 | 99.99th=[55837] 00:20:54.010 bw ( KiB/s): min=22320, max=23176, per=32.72%, avg=22748.00, stdev=605.28, samples=2 00:20:54.010 iops : min= 5580, max= 5794, avg=5687.00, stdev=151.32, samples=2 00:20:54.010 lat (usec) : 750=0.02% 00:20:54.010 lat (msec) : 2=0.44%, 4=2.11%, 10=28.26%, 20=66.50%, 50=2.05% 00:20:54.010 lat (msec) : 100=0.62% 00:20:54.010 cpu : usr=4.06%, sys=6.63%, ctx=550, majf=0, minf=1 00:20:54.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:54.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.010 issued rwts: total=5632,5814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.010 job2: (groupid=0, jobs=1): err= 0: pid=2435725: Wed Jul 10 23:25:02 2024 00:20:54.010 read: IOPS=3478, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1005msec) 00:20:54.010 slat (nsec): min=1789, max=15036k, avg=131350.87, stdev=763653.09 00:20:54.010 clat (usec): min=3644, max=77439, avg=15633.47, stdev=6719.99 00:20:54.010 lat (usec): min=3748, max=77447, avg=15764.82, stdev=6824.34 00:20:54.010 clat percentiles (usec): 00:20:54.010 | 1.00th=[ 8225], 5.00th=[10683], 10.00th=[11469], 20.00th=[11863], 00:20:54.010 | 30.00th=[12256], 40.00th=[13042], 50.00th=[14353], 60.00th=[14877], 00:20:54.010 | 70.00th=[15270], 80.00th=[17433], 90.00th=[22938], 95.00th=[26084], 00:20:54.010 | 99.00th=[47973], 99.50th=[60031], 99.90th=[77071], 99.95th=[77071], 00:20:54.010 | 99.99th=[77071] 00:20:54.010 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:20:54.010 slat (usec): min=2, max=15032, avg=137.74, stdev=765.69 00:20:54.010 clat (msec): min=6, max=116, avg=20.20, stdev=18.40 00:20:54.010 lat (msec): min=6, max=116, avg=20.34, stdev=18.50 00:20:54.010 clat percentiles (msec): 00:20:54.010 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 12], 00:20:54.010 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:20:54.010 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 36], 95.00th=[ 59], 00:20:54.010 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 116], 99.95th=[ 116], 00:20:54.010 | 99.99th=[ 116] 00:20:54.010 bw ( KiB/s): min=10440, max=18232, per=20.62%, avg=14336.00, stdev=5509.78, samples=2 00:20:54.010 iops : min= 2610, max= 4558, avg=3584.00, stdev=1377.44, samples=2 00:20:54.010 lat (msec) : 4=0.03%, 10=2.54%, 20=76.27%, 50=16.89%, 100=3.29% 00:20:54.010 lat (msec) : 250=0.97% 00:20:54.010 cpu : usr=2.99%, sys=6.67%, ctx=300, majf=0, minf=1 00:20:54.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:54.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.010 issued rwts: total=3496,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.010 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:20:54.010 job3: (groupid=0, jobs=1): err= 0: pid=2435726: Wed Jul 10 23:25:02 2024 00:20:54.010 read: IOPS=4683, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1009msec) 00:20:54.010 slat (nsec): min=1269, max=13593k, avg=107782.56, stdev=816457.56 00:20:54.010 clat (usec): min=4601, max=32375, avg=13531.34, stdev=3854.07 00:20:54.010 lat (usec): min=4607, max=32384, avg=13639.12, stdev=3921.39 00:20:54.010 clat percentiles (usec): 00:20:54.010 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[10552], 20.00th=[11338], 00:20:54.010 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12911], 00:20:54.010 | 70.00th=[13960], 80.00th=[16319], 90.00th=[18220], 95.00th=[21103], 00:20:54.010 | 99.00th=[27919], 99.50th=[28443], 99.90th=[32375], 99.95th=[32375], 00:20:54.010 | 99.99th=[32375] 00:20:54.010 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:20:54.010 slat (usec): min=2, max=10062, avg=89.01, stdev=604.03 00:20:54.010 clat (usec): min=1567, max=39518, avg=12502.48, stdev=5349.18 00:20:54.010 lat (usec): min=1580, max=39532, avg=12591.50, stdev=5400.02 00:20:54.010 clat percentiles (usec): 00:20:54.010 | 1.00th=[ 3884], 5.00th=[ 6915], 10.00th=[ 8455], 20.00th=[10028], 00:20:54.010 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:20:54.010 | 70.00th=[12125], 80.00th=[12387], 90.00th=[19792], 95.00th=[22938], 00:20:54.010 | 99.00th=[36439], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:20:54.010 | 99.99th=[39584] 00:20:54.010 bw ( KiB/s): min=19504, max=21384, per=29.40%, avg=20444.00, stdev=1329.36, samples=2 00:20:54.010 iops : min= 4876, max= 5346, avg=5111.00, stdev=332.34, samples=2 00:20:54.010 lat (msec) : 2=0.07%, 4=0.54%, 10=14.12%, 20=77.18%, 50=8.09% 00:20:54.010 cpu : usr=4.07%, sys=5.95%, ctx=423, majf=0, minf=1 00:20:54.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:54.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.010 issued rwts: total=4726,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.010 00:20:54.010 Run status group 0 (all jobs): 00:20:54.010 READ: bw=63.5MiB/s (66.6MB/s), 10.00MiB/s-21.8MiB/s (10.5MB/s-22.8MB/s), io=64.2MiB (67.4MB), run=1005-1012msec 00:20:54.010 WRITE: bw=67.9MiB/s (71.2MB/s), 11.9MiB/s-22.5MiB/s (12.4MB/s-23.6MB/s), io=68.7MiB (72.0MB), run=1005-1012msec 00:20:54.010 00:20:54.010 Disk stats (read/write): 00:20:54.010 nvme0n1: ios=2579/2647, merge=0/0, ticks=36892/40945, in_queue=77837, util=97.60% 00:20:54.010 nvme0n2: ios=4644/4911, merge=0/0, ticks=48900/47641, in_queue=96541, util=97.97% 00:20:54.010 nvme0n3: ios=2677/3072, merge=0/0, ticks=21818/28748, in_queue=50566, util=88.94% 00:20:54.010 nvme0n4: ios=3962/4096, merge=0/0, ticks=50869/49658, in_queue=100527, util=89.68% 00:20:54.010 23:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:54.010 23:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2435824 00:20:54.010 23:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:54.010 23:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:20:54.010 [global] 00:20:54.010 thread=1 00:20:54.010 invalidate=1 00:20:54.010 rw=read 00:20:54.010 time_based=1 00:20:54.010 
runtime=10 00:20:54.010 ioengine=libaio 00:20:54.010 direct=1 00:20:54.010 bs=4096 00:20:54.010 iodepth=1 00:20:54.010 norandommap=1 00:20:54.010 numjobs=1 00:20:54.010 00:20:54.010 [job0] 00:20:54.010 filename=/dev/nvme0n1 00:20:54.010 [job1] 00:20:54.010 filename=/dev/nvme0n2 00:20:54.010 [job2] 00:20:54.010 filename=/dev/nvme0n3 00:20:54.010 [job3] 00:20:54.010 filename=/dev/nvme0n4 00:20:54.010 Could not set queue depth (nvme0n1) 00:20:54.010 Could not set queue depth (nvme0n2) 00:20:54.010 Could not set queue depth (nvme0n3) 00:20:54.010 Could not set queue depth (nvme0n4) 00:20:54.268 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.268 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.268 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.268 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.268 fio-3.35 00:20:54.268 Starting 4 threads 00:20:56.793 23:25:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:57.051 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:57.051 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=278528, buflen=4096 00:20:57.051 fio: pid=2436101, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:57.309 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:57.309 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:57.309 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=286720, buflen=4096 00:20:57.309 fio: pid=2436100, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:57.568 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1540096, buflen=4096 00:20:57.568 fio: pid=2436092, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:57.568 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:57.568 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:57.826 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=9695232, buflen=4096 00:20:57.826 fio: pid=2436093, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:57.826 00:20:57.826 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2436092: Wed Jul 10 23:25:06 2024 00:20:57.826 read: IOPS=121, BW=483KiB/s (495kB/s)(1504KiB/3114msec) 00:20:57.826 slat (nsec): min=3072, max=74690, avg=10390.11, stdev=7937.68 00:20:57.826 clat (usec): min=276, max=92757, avg=8215.17, stdev=16469.06 00:20:57.827 lat (usec): min=283, max=92779, avg=8225.53, stdev=16474.62 00:20:57.827 clat percentiles (usec): 00:20:57.827 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:20:57.827 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:20:57.827 | 70.00th=[ 375], 80.00th=[ 668], 90.00th=[41157], 
95.00th=[41157], 00:20:57.827 | 99.00th=[42206], 99.50th=[42206], 99.90th=[92799], 99.95th=[92799], 00:20:57.827 | 99.99th=[92799] 00:20:57.827 bw ( KiB/s): min= 96, max= 1426, per=13.47%, avg=471.00, stdev=536.92, samples=6 00:20:57.827 iops : min= 24, max= 356, avg=117.67, stdev=134.05, samples=6 00:20:57.827 lat (usec) : 500=77.45%, 750=2.92%, 1000=0.27% 00:20:57.827 lat (msec) : 20=0.27%, 50=18.57%, 100=0.27% 00:20:57.827 cpu : usr=0.10%, sys=0.10%, ctx=380, majf=0, minf=1 00:20:57.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:57.827 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2436093: Wed Jul 10 23:25:06 2024 00:20:57.827 read: IOPS=718, BW=2873KiB/s (2942kB/s)(9468KiB/3296msec) 00:20:57.827 slat (usec): min=5, max=30347, avg=42.43, stdev=840.60 00:20:57.827 clat (usec): min=204, max=41982, avg=1344.86, stdev=6470.91 00:20:57.827 lat (usec): min=211, max=42003, avg=1387.31, stdev=6523.07 00:20:57.827 clat percentiles (usec): 00:20:57.827 | 1.00th=[ 212], 5.00th=[ 241], 10.00th=[ 253], 20.00th=[ 265], 00:20:57.827 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:20:57.827 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 371], 00:20:57.827 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:20:57.827 | 99.99th=[42206] 00:20:57.827 bw ( KiB/s): min= 96, max=11921, per=59.55%, avg=2082.83, stdev=4819.82, samples=6 00:20:57.827 iops : min= 24, max= 2980, avg=520.67, stdev=1204.85, samples=6 00:20:57.827 lat (usec) : 250=8.87%, 500=87.84%, 750=0.55% 00:20:57.827 lat (msec) : 2=0.04%, 10=0.04%, 50=2.62% 00:20:57.827 cpu : usr=0.21%, sys=0.67%, ctx=2376, majf=0, minf=1 00:20:57.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:57.827 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2436100: Wed Jul 10 23:25:06 2024 00:20:57.827 read: IOPS=24, BW=97.7KiB/s (100kB/s)(280KiB/2865msec) 00:20:57.827 slat (usec): min=11, max=11639, avg=185.75, stdev=1378.76 00:20:57.827 clat (usec): min=445, max=42324, avg=40440.39, stdev=4855.76 00:20:57.827 lat (usec): min=482, max=53964, avg=40628.61, stdev=5111.13 00:20:57.827 clat percentiles (usec): 00:20:57.827 | 1.00th=[ 445], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:57.827 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:57.827 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:57.827 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:57.827 | 99.99th=[42206] 00:20:57.827 bw ( KiB/s): min= 96, max= 104, per=2.83%, avg=99.20, stdev= 4.38, samples=5 00:20:57.827 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:20:57.827 lat (usec) : 500=1.41% 00:20:57.827 lat (msec) : 50=97.18% 00:20:57.827 cpu : usr=0.14%, 
sys=0.00%, ctx=73, majf=0, minf=1 00:20:57.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:57.827 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2436101: Wed Jul 10 23:25:06 2024 00:20:57.827 read: IOPS=25, BW=101KiB/s (103kB/s)(272KiB/2692msec) 00:20:57.827 slat (nsec): min=14569, max=41216, avg=22362.48, stdev=2821.09 00:20:57.827 clat (usec): min=632, max=42040, avg=39261.04, stdev=8308.73 00:20:57.827 lat (usec): min=656, max=42055, avg=39283.38, stdev=8308.57 00:20:57.827 clat percentiles (usec): 00:20:57.827 | 1.00th=[ 635], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:57.827 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:57.827 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:57.827 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:57.827 | 99.99th=[42206] 00:20:57.827 bw ( KiB/s): min= 96, max= 112, per=2.86%, avg=100.80, stdev= 7.16, samples=5 00:20:57.827 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:20:57.827 lat (usec) : 750=2.90% 00:20:57.827 lat (msec) : 2=1.45%, 50=94.20% 00:20:57.827 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:20:57.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.827 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:57.827 00:20:57.827 Run status group 0 (all jobs): 00:20:57.827 READ: bw=3496KiB/s (3580kB/s), 97.7KiB/s-2873KiB/s (100kB/s-2942kB/s), io=11.3MiB (11.8MB), run=2692-3296msec 00:20:57.827 00:20:57.827 Disk stats (read/write): 00:20:57.827 nvme0n1: ios=393/0, merge=0/0, ticks=3042/0, in_queue=3042, util=94.88% 00:20:57.827 nvme0n2: ios=1744/0, merge=0/0, ticks=2996/0, in_queue=2996, util=93.82% 00:20:57.827 nvme0n3: ios=107/0, merge=0/0, ticks=2952/0, in_queue=2952, util=99.56% 00:20:57.827 nvme0n4: ios=65/0, merge=0/0, ticks=2547/0, in_queue=2547, util=96.39% 00:20:57.827 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:57.827 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:58.086 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.086 23:25:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:58.345 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.345 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:58.603 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.603 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:58.861 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.861 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:59.120 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:59.120 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2435824 00:20:59.120 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:59.120 23:25:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:00.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:00.056 nvmf hotplug test: fio failed as expected 00:21:00.056 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.314 rmmod nvme_tcp 00:21:00.314 rmmod nvme_fabrics 00:21:00.314 rmmod nvme_keyring 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@125 -- # return 0 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2433015 ']' 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2433015 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2433015 ']' 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2433015 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.314 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2433015 00:21:00.572 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:00.572 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:00.572 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2433015' 00:21:00.572 killing process with pid 2433015 00:21:00.572 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2433015 00:21:00.572 23:25:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2433015 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.948 23:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.852 23:25:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.852 00:21:03.852 real 0m29.167s 00:21:03.852 user 1m56.428s 00:21:03.852 sys 0m7.323s 00:21:03.852 23:25:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.852 23:25:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.852 ************************************ 00:21:03.852 END TEST nvmf_fio_target 00:21:03.852 ************************************ 00:21:03.852 23:25:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:03.852 23:25:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:03.852 23:25:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:03.852 23:25:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.852 23:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.852 ************************************ 00:21:03.852 START TEST nvmf_bdevio 00:21:03.852 ************************************ 00:21:03.852 23:25:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:04.111 * Looking for test storage... 
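The teardown that produced the err=121 lines above condenses to the sequence below: a sketch of fio.sh's hotplug phase, with $rpc standing for the full rpc.py path and $fio_pid for the background fio-wrapper launched with "-t read -r 10".

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Pull the storage out from under the still-running 10 s fio read job:
    # first the RAID bdevs, then every malloc bdev backing a namespace.
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete $bdev
    done

    # fio is expected to die with err=121 (Remote I/O error) on every file.
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1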
00:21:04.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.111 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.112 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.112 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.112 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.112 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.112 23:25:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:21:04.112 23:25:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:21:09.382 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:09.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:09.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:09.383 Found net devices under 0000:86:00.0: cvl_0_0 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:09.383 
Found net devices under 0000:86:00.1: cvl_0_1 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:09.383 23:25:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:09.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:21:09.383 00:21:09.383 --- 10.0.0.2 ping statistics --- 00:21:09.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.383 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:21:09.383 00:21:09.383 --- 10.0.0.1 ping statistics --- 00:21:09.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.383 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2440561 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2440561 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2440561 ']' 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.383 23:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:09.383 [2024-07-10 23:25:18.321869] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:21:09.383 [2024-07-10 23:25:18.321962] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.383 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.383 [2024-07-10 23:25:18.431871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.641 [2024-07-10 23:25:18.646414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.641 [2024-07-10 23:25:18.646459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:09.641 [2024-07-10 23:25:18.646470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.641 [2024-07-10 23:25:18.646479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.641 [2024-07-10 23:25:18.646487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.641 [2024-07-10 23:25:18.646646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.641 [2024-07-10 23:25:18.646730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:09.641 [2024-07-10 23:25:18.646806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.641 [2024-07-10 23:25:18.646830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.206 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:10.207 [2024-07-10 23:25:19.164447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:10.207 Malloc0 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.207 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
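Condensed from the xtrace above: the entire target under test is stood up with five RPCs. This is a sketch, not the literal script; rpc.py stands for the scripts/rpc.py invocation seen throughout this log, and the rpc_cmd wrapper simply routes it to /var/tmp/spdk.sock inside the target namespace. The listener notice these calls produce appears directly below.

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # options taken verbatim from NVMF_TRANSPORT_OPTS
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420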
00:21:10.464 [2024-07-10 23:25:19.282517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.464 { 00:21:10.464 "params": { 00:21:10.464 "name": "Nvme$subsystem", 00:21:10.464 "trtype": "$TEST_TRANSPORT", 00:21:10.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.464 "adrfam": "ipv4", 00:21:10.464 "trsvcid": "$NVMF_PORT", 00:21:10.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.464 "hdgst": ${hdgst:-false}, 00:21:10.464 "ddgst": ${ddgst:-false} 00:21:10.464 }, 00:21:10.464 "method": "bdev_nvme_attach_controller" 00:21:10.464 } 00:21:10.464 EOF 00:21:10.464 )") 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:21:10.464 23:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:10.464 "params": { 00:21:10.464 "name": "Nvme1", 00:21:10.464 "trtype": "tcp", 00:21:10.464 "traddr": "10.0.0.2", 00:21:10.464 "adrfam": "ipv4", 00:21:10.464 "trsvcid": "4420", 00:21:10.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.464 "hdgst": false, 00:21:10.464 "ddgst": false 00:21:10.464 }, 00:21:10.464 "method": "bdev_nvme_attach_controller" 00:21:10.464 }' 00:21:10.464 [2024-07-10 23:25:19.358659] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:21:10.464 [2024-07-10 23:25:19.358745] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440740 ] 00:21:10.464 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.464 [2024-07-10 23:25:19.464132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.722 [2024-07-10 23:25:19.702496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.722 [2024-07-10 23:25:19.702563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.722 [2024-07-10 23:25:19.702571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.355 I/O targets: 00:21:11.355 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:11.355 00:21:11.355 00:21:11.355 CUnit - A unit testing framework for C - Version 2.1-3 00:21:11.355 http://cunit.sourceforge.net/ 00:21:11.355 00:21:11.355 00:21:11.355 Suite: bdevio tests on: Nvme1n1 00:21:11.355 Test: blockdev write read block ...passed 00:21:11.355 Test: blockdev write zeroes read block ...passed 00:21:11.355 Test: blockdev write zeroes read no split ...passed 00:21:11.355 Test: blockdev write zeroes read split ...passed 00:21:11.612 Test: blockdev write zeroes read split partial ...passed 00:21:11.612 Test: blockdev reset ...[2024-07-10 23:25:20.430039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:11.612 [2024-07-10 23:25:20.430144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:21:11.612 [2024-07-10 23:25:20.486917] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:11.612 passed 00:21:11.612 Test: blockdev write read 8 blocks ...passed 00:21:11.612 Test: blockdev write read size > 128k ...passed 00:21:11.612 Test: blockdev write read invalid size ...passed 00:21:11.612 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.612 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.612 Test: blockdev write read max offset ...passed 00:21:11.612 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.613 Test: blockdev writev readv 8 blocks ...passed 00:21:11.613 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.613 Test: blockdev writev readv block ...passed 00:21:11.613 Test: blockdev writev readv size > 128k ...passed 00:21:11.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.613 Test: blockdev comparev and writev ...[2024-07-10 23:25:20.661091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.661137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.661156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.661175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.661497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.661513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.661529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.661539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.661853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.661869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.661886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.661896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.662215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.662231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:11.613 [2024-07-10 23:25:20.662247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.613 [2024-07-10 23:25:20.662258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:11.869 passed 00:21:11.869 Test: blockdev nvme passthru rw ...passed 00:21:11.869 Test: blockdev nvme passthru vendor specific ...[2024-07-10 23:25:20.744614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.869 [2024-07-10 23:25:20.744650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:11.869 [2024-07-10 23:25:20.744811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.869 [2024-07-10 23:25:20.744825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:11.869 [2024-07-10 23:25:20.744976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.869 [2024-07-10 23:25:20.744994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:11.869 [2024-07-10 23:25:20.745141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.869 [2024-07-10 23:25:20.745154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:11.869 passed 00:21:11.869 Test: blockdev nvme admin passthru ...passed 00:21:11.869 Test: blockdev copy ...passed 00:21:11.869 00:21:11.869 Run Summary: Type Total Ran Passed Failed Inactive 00:21:11.869 suites 1 1 n/a 0 0 00:21:11.869 tests 23 23 23 0 0 00:21:11.869 asserts 152 152 152 0 n/a 00:21:11.869 00:21:11.869 Elapsed time = 1.352 seconds 00:21:12.799 23:25:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.799 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.799 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.057 rmmod nvme_tcp 00:21:13.057 rmmod nvme_fabrics 00:21:13.057 rmmod nvme_keyring 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2440561 ']' 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2440561 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2440561 ']' 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2440561 00:21:13.057 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2440561 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2440561' 00:21:13.058 killing process with pid 2440561 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2440561 00:21:13.058 23:25:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2440561 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.952 23:25:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.848 23:25:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.848 00:21:16.848 real 0m12.686s 00:21:16.848 user 0m24.234s 00:21:16.848 sys 0m4.680s 00:21:16.848 23:25:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:16.848 23:25:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.848 ************************************ 00:21:16.848 END TEST nvmf_bdevio 00:21:16.848 ************************************ 00:21:16.848 23:25:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:16.848 23:25:25 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:16.848 23:25:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:16.848 23:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.848 23:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.848 ************************************ 00:21:16.848 START TEST nvmf_auth_target 00:21:16.848 ************************************ 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:16.848 * Looking for test storage... 
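The mirror-image teardown, gathered from the tail of the bdevio suite above, amounts to the sketch below. Note one assumption: the body of _remove_spdk_ns is never shown in this trace (it runs behind xtrace_disable_per_cmd), so the ip netns del line is an inference about what it does; the other commands are verbatim from the log.

    modprobe -v -r nvme-tcp          # unload the kernel initiator modules; the rmmod lines above are its output
    modprobe -v -r nvme-fabrics
    ip netns del cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1         # leave the initiator port unconfigured for the next suite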
00:21:16.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.848 23:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.104 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.105 23:25:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:22.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:22.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:21:22.105 Found net devices under 0000:86:00.0: cvl_0_0 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:22.105 Found net devices under 0000:86:00.1: cvl_0_1 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:21:22.105 00:21:22.105 --- 10.0.0.2 ping statistics --- 00:21:22.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.105 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:22.105 00:21:22.105 --- 10.0.0.1 ping statistics --- 00:21:22.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.105 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2444802 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2444802 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2444802 ']' 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
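Each suite rebuilds the same two-port loopback topology before starting the target. Pulled out of the xtrace above, the setup amounts to the following; all commands are verbatim from this run, where cvl_0_0 and cvl_0_1 are the two E810 ports found earlier, presumably cabled back-to-back given NET_TYPE=phy.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1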
00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.105 23:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.038 23:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.038 23:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:23.038 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.038 23:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2444931 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5dce21791471d4e2f41a2380edd5955c421137dc73f63865 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uXn 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5dce21791471d4e2f41a2380edd5955c421137dc73f63865 0 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5dce21791471d4e2f41a2380edd5955c421137dc73f63865 0 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5dce21791471d4e2f41a2380edd5955c421137dc73f63865 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uXn 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uXn 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uXn 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
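
gen_dhchap_key above draws len/2 random bytes via xxd -p (so "null 48" yields a 48-character hex string), then an inline python step wraps that string in the DHHC-1 secret representation used on the wire: base64 of the secret bytes with their little-endian CRC-32 appended, prefixed by a digest id (00 = unhashed/null, 01/02/03 = SHA-256/384/512). The exact python body in nvmf/common.sh is not shown in the log, so treat the following as an approximation that follows the NVMe DH-HMAC-CHAP secret format:

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, as in gen_dhchap_key null 48
python3 - "$key" 0 <<'PY' > /tmp/spdk.key-null.example
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, 'little')   # CRC-32 of the secret, appended
print('DHHC-1:%02x:%s:' % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY
chmod 0600 /tmp/spdk.key-null.example
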
key 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f08384ee76e782a414982c86cb88fbe9a427c56063db4a29ac9b40377c2a2380 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8pe 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f08384ee76e782a414982c86cb88fbe9a427c56063db4a29ac9b40377c2a2380 3 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f08384ee76e782a414982c86cb88fbe9a427c56063db4a29ac9b40377c2a2380 3 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f08384ee76e782a414982c86cb88fbe9a427c56063db4a29ac9b40377c2a2380 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8pe 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8pe 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8pe 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2d005e3b064b671c11ea3b124d0709d2 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IHN 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2d005e3b064b671c11ea3b124d0709d2 1 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2d005e3b064b671c11ea3b124d0709d2 1 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=2d005e3b064b671c11ea3b124d0709d2 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:23.039 23:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IHN 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IHN 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.IHN 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8336c12fe897fcde98be85c6f4428c85a38bff09740016d2 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.plZ 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8336c12fe897fcde98be85c6f4428c85a38bff09740016d2 2 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8336c12fe897fcde98be85c6f4428c85a38bff09740016d2 2 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8336c12fe897fcde98be85c6f4428c85a38bff09740016d2 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.plZ 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.plZ 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.plZ 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d35952fbc30718554fae4ca922407d800fc92e546e034f31 00:21:23.039 
23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:23.039 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LnY 00:21:23.040 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d35952fbc30718554fae4ca922407d800fc92e546e034f31 2 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d35952fbc30718554fae4ca922407d800fc92e546e034f31 2 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d35952fbc30718554fae4ca922407d800fc92e546e034f31 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LnY 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LnY 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.LnY 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=25ca14ee8ba79a70ea3904a3efe90c41 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.e0X 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 25ca14ee8ba79a70ea3904a3efe90c41 1 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 25ca14ee8ba79a70ea3904a3efe90c41 1 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=25ca14ee8ba79a70ea3904a3efe90c41 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.e0X 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.e0X 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.e0X 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75f54b0305c86ac90223b8bc4020a205f845702637382ceb5ef8246511a4da3c 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.F1K 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 75f54b0305c86ac90223b8bc4020a205f845702637382ceb5ef8246511a4da3c 3 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75f54b0305c86ac90223b8bc4020a205f845702637382ceb5ef8246511a4da3c 3 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75f54b0305c86ac90223b8bc4020a205f845702637382ceb5ef8246511a4da3c 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.F1K 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.F1K 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.F1K 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2444802 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2444802 ']' 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.298 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2444931 /var/tmp/host.sock 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2444931 ']' 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:23.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.556 23:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.122 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.122 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:24.122 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:21:24.122 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.122 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uXn 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uXn 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uXn 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8pe ]] 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8pe 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8pe 00:21:24.380 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
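
Two RPC sockets are in play from this point on: rpc_cmd (from autotest_common.sh) drives the nvmf_tgt target process on the default /var/tmp/spdk.sock, while the hostrpc wrapper in target/auth.sh points the same rpc.py at the initiator-side spdk_tgt on /var/tmp/host.sock. Reconstructed shape of the two wrappers (retry and xtrace plumbing omitted, paths shortened):

rpc_cmd() { scripts/rpc.py "$@"; }                        # target, /var/tmp/spdk.sock
hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side spdk_tgt
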
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8pe 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IHN 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IHN 00:21:24.638 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IHN 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.plZ ]] 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.plZ 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.plZ 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.plZ 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LnY 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LnY 00:21:24.896 23:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LnY 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.e0X ]] 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e0X 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.e0X 00:21:25.153 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
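
Each generated secret file is registered under a stable keyring name on both processes, so the DHCHAP options later refer to key0..key3 and ckey0..ckey2 rather than to file paths. One iteration of the loop, condensed using the wrappers sketched above (paths from the trace):

rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.IHN    # target keyring
hostrpc keyring_file_add_key key1  /tmp/spdk.key-sha256.IHN    # host keyring
rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.plZ    # controller-side key
hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.plZ
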
/tmp/spdk.key-sha256.e0X 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.F1K 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.F1K 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.F1K 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:25.411 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.668 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.925 00:21:25.925 23:25:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.925 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.925 23:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.183 { 00:21:26.183 "cntlid": 1, 00:21:26.183 "qid": 0, 00:21:26.183 "state": "enabled", 00:21:26.183 "thread": "nvmf_tgt_poll_group_000", 00:21:26.183 "listen_address": { 00:21:26.183 "trtype": "TCP", 00:21:26.183 "adrfam": "IPv4", 00:21:26.183 "traddr": "10.0.0.2", 00:21:26.183 "trsvcid": "4420" 00:21:26.183 }, 00:21:26.183 "peer_address": { 00:21:26.183 "trtype": "TCP", 00:21:26.183 "adrfam": "IPv4", 00:21:26.183 "traddr": "10.0.0.1", 00:21:26.183 "trsvcid": "49746" 00:21:26.183 }, 00:21:26.183 "auth": { 00:21:26.183 "state": "completed", 00:21:26.183 "digest": "sha256", 00:21:26.183 "dhgroup": "null" 00:21:26.183 } 00:21:26.183 } 00:21:26.183 ]' 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.183 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.441 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.007 23:25:35 
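
That was one full connect_authenticate round; every round in this log has the same skeleton: pin the host driver to a single digest/dhgroup pair, authorize the host NQN on the subsystem with a key pair, attach a controller over 10.0.0.2:4420, verify the qpair, then detach and repeat the handshake from the kernel initiator. The RPC skeleton of the setup half, with $hostnqn standing in for the uuid-based host NQN from the trace:

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0               # bidirectional auth
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
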
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:27.007 23:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.265 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.265 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.524 { 00:21:27.524 "cntlid": 3, 00:21:27.524 "qid": 0, 00:21:27.524 
"state": "enabled", 00:21:27.524 "thread": "nvmf_tgt_poll_group_000", 00:21:27.524 "listen_address": { 00:21:27.524 "trtype": "TCP", 00:21:27.524 "adrfam": "IPv4", 00:21:27.524 "traddr": "10.0.0.2", 00:21:27.524 "trsvcid": "4420" 00:21:27.524 }, 00:21:27.524 "peer_address": { 00:21:27.524 "trtype": "TCP", 00:21:27.524 "adrfam": "IPv4", 00:21:27.524 "traddr": "10.0.0.1", 00:21:27.524 "trsvcid": "49762" 00:21:27.524 }, 00:21:27.524 "auth": { 00:21:27.524 "state": "completed", 00:21:27.524 "digest": "sha256", 00:21:27.524 "dhgroup": "null" 00:21:27.524 } 00:21:27.524 } 00:21:27.524 ]' 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.524 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.782 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:27.782 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.782 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.782 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.782 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.782 23:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:21:28.347 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.604 23:25:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.604 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.862 00:21:28.862 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.862 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.862 23:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.120 { 00:21:29.120 "cntlid": 5, 00:21:29.120 "qid": 0, 00:21:29.120 "state": "enabled", 00:21:29.120 "thread": "nvmf_tgt_poll_group_000", 00:21:29.120 "listen_address": { 00:21:29.120 "trtype": "TCP", 00:21:29.120 "adrfam": "IPv4", 00:21:29.120 "traddr": "10.0.0.2", 00:21:29.120 "trsvcid": "4420" 00:21:29.120 }, 00:21:29.120 "peer_address": { 00:21:29.120 "trtype": "TCP", 00:21:29.120 "adrfam": "IPv4", 00:21:29.120 "traddr": "10.0.0.1", 00:21:29.120 "trsvcid": "49782" 00:21:29.120 }, 00:21:29.120 "auth": { 00:21:29.120 "state": "completed", 00:21:29.120 "digest": "sha256", 00:21:29.120 "dhgroup": "null" 00:21:29.120 } 00:21:29.120 } 00:21:29.120 ]' 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.120 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.378 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:29.944 23:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.202 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.460 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.460 { 00:21:30.460 "cntlid": 7, 00:21:30.460 "qid": 0, 00:21:30.460 "state": "enabled", 00:21:30.460 "thread": "nvmf_tgt_poll_group_000", 00:21:30.460 "listen_address": { 00:21:30.460 "trtype": "TCP", 00:21:30.460 "adrfam": "IPv4", 00:21:30.460 "traddr": "10.0.0.2", 00:21:30.460 "trsvcid": "4420" 00:21:30.460 }, 00:21:30.460 "peer_address": { 00:21:30.460 "trtype": "TCP", 00:21:30.460 "adrfam": "IPv4", 00:21:30.460 "traddr": "10.0.0.1", 00:21:30.460 "trsvcid": "49800" 00:21:30.460 }, 00:21:30.460 "auth": { 00:21:30.460 "state": "completed", 00:21:30.460 "digest": "sha256", 00:21:30.460 "dhgroup": "null" 00:21:30.460 } 00:21:30.460 } 00:21:30.460 ]' 00:21:30.460 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.718 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.975 23:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
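
Each round ends by redoing the handshake from the kernel with nvme-cli, passing the literal DHHC-1 strings. The key3 round above is the notable one: ckeys[3] was deliberately left empty, so nvme connect carries only --dhchap-secret and the round exercises unidirectional authentication (the host proves itself, the controller does not). The two shapes, secrets abbreviated and $hostnqn/$hostid standing for the uuid values in the trace:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret 'DHHC-1:03:...'       # key3: no controller secret
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
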
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.543 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.835 00:21:31.835 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.835 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.835 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.094 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.094 23:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.094 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:21:32.094 23:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.094 { 00:21:32.094 "cntlid": 9, 00:21:32.094 "qid": 0, 00:21:32.094 "state": "enabled", 00:21:32.094 "thread": "nvmf_tgt_poll_group_000", 00:21:32.094 "listen_address": { 00:21:32.094 "trtype": "TCP", 00:21:32.094 "adrfam": "IPv4", 00:21:32.094 "traddr": "10.0.0.2", 00:21:32.094 "trsvcid": "4420" 00:21:32.094 }, 00:21:32.094 "peer_address": { 00:21:32.094 "trtype": "TCP", 00:21:32.094 "adrfam": "IPv4", 00:21:32.094 "traddr": "10.0.0.1", 00:21:32.094 "trsvcid": "34804" 00:21:32.094 }, 00:21:32.094 "auth": { 00:21:32.094 "state": "completed", 00:21:32.094 "digest": "sha256", 00:21:32.094 "dhgroup": "ffdhe2048" 00:21:32.094 } 00:21:32.094 } 00:21:32.094 ]' 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.094 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.352 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:32.919 23:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
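
With the null-group rounds done, the log replays the same four key rounds under --dhchap-dhgroups ffdhe2048; the enclosing structure of target/auth.sh is a full sweep over digests, DH groups, and key ids, one attach/verify/detach cycle per combination. Schematically (only sha256 and the null/ffdhe2048 groups are visible in this excerpt, so the full lists below are assumptions based on the NVMe-defined sets):

for digest in sha256 sha384 sha512; do                     # assumed full digest list
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
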
--dhchap-dhgroups ffdhe2048 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.178 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.436 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.436 23:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.694 { 00:21:33.694 "cntlid": 11, 00:21:33.694 "qid": 0, 00:21:33.694 "state": "enabled", 00:21:33.694 "thread": "nvmf_tgt_poll_group_000", 00:21:33.694 "listen_address": { 00:21:33.694 "trtype": "TCP", 00:21:33.694 "adrfam": "IPv4", 00:21:33.694 "traddr": "10.0.0.2", 00:21:33.694 "trsvcid": "4420" 00:21:33.694 }, 00:21:33.694 "peer_address": { 00:21:33.694 "trtype": "TCP", 00:21:33.694 "adrfam": "IPv4", 00:21:33.694 "traddr": "10.0.0.1", 00:21:33.694 "trsvcid": "34836" 00:21:33.694 }, 00:21:33.694 "auth": { 00:21:33.694 "state": "completed", 00:21:33.694 "digest": "sha256", 00:21:33.694 "dhgroup": "ffdhe2048" 00:21:33.694 } 00:21:33.694 } 00:21:33.694 ]' 00:21:33.694 
23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.694 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.952 23:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.516 23:25:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.773 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.773 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.773 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.773 00:21:34.773 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.773 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.773 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.030 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.030 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.031 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.031 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.031 23:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.031 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.031 { 00:21:35.031 "cntlid": 13, 00:21:35.031 "qid": 0, 00:21:35.031 "state": "enabled", 00:21:35.031 "thread": "nvmf_tgt_poll_group_000", 00:21:35.031 "listen_address": { 00:21:35.031 "trtype": "TCP", 00:21:35.031 "adrfam": "IPv4", 00:21:35.031 "traddr": "10.0.0.2", 00:21:35.031 "trsvcid": "4420" 00:21:35.031 }, 00:21:35.031 "peer_address": { 00:21:35.031 "trtype": "TCP", 00:21:35.031 "adrfam": "IPv4", 00:21:35.031 "traddr": "10.0.0.1", 00:21:35.031 "trsvcid": "34872" 00:21:35.031 }, 00:21:35.031 "auth": { 00:21:35.031 "state": "completed", 00:21:35.031 "digest": "sha256", 00:21:35.031 "dhgroup": "ffdhe2048" 00:21:35.031 } 00:21:35.031 } 00:21:35.031 ]' 00:21:35.031 23:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.031 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.031 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.031 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.031 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.288 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.288 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.288 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.288 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:35.851 23:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:36.108 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:21:36.108 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.109 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.365 00:21:36.365 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.365 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.365 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.622 { 00:21:36.622 "cntlid": 15, 00:21:36.622 "qid": 0, 00:21:36.622 "state": "enabled", 00:21:36.622 "thread": "nvmf_tgt_poll_group_000", 00:21:36.622 "listen_address": { 00:21:36.622 "trtype": "TCP", 00:21:36.622 "adrfam": "IPv4", 00:21:36.622 "traddr": "10.0.0.2", 00:21:36.622 "trsvcid": "4420" 00:21:36.622 }, 00:21:36.622 "peer_address": { 00:21:36.622 "trtype": "TCP", 00:21:36.622 "adrfam": "IPv4", 00:21:36.622 "traddr": "10.0.0.1", 00:21:36.622 "trsvcid": "34900" 00:21:36.622 }, 00:21:36.622 "auth": { 00:21:36.622 "state": "completed", 00:21:36.622 "digest": "sha256", 00:21:36.622 "dhgroup": "ffdhe2048" 00:21:36.622 } 00:21:36.622 } 00:21:36.622 ]' 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.622 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.623 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:36.623 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.623 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.623 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.623 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.880 23:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:37.444 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.701 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.701 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.958 { 00:21:37.958 "cntlid": 17, 00:21:37.958 "qid": 0, 00:21:37.958 "state": "enabled", 00:21:37.958 "thread": "nvmf_tgt_poll_group_000", 00:21:37.958 "listen_address": { 00:21:37.958 "trtype": "TCP", 00:21:37.958 "adrfam": "IPv4", 00:21:37.958 "traddr": 
"10.0.0.2", 00:21:37.958 "trsvcid": "4420" 00:21:37.958 }, 00:21:37.958 "peer_address": { 00:21:37.958 "trtype": "TCP", 00:21:37.958 "adrfam": "IPv4", 00:21:37.958 "traddr": "10.0.0.1", 00:21:37.958 "trsvcid": "34930" 00:21:37.958 }, 00:21:37.958 "auth": { 00:21:37.958 "state": "completed", 00:21:37.958 "digest": "sha256", 00:21:37.958 "dhgroup": "ffdhe3072" 00:21:37.958 } 00:21:37.958 } 00:21:37.958 ]' 00:21:37.958 23:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.958 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.958 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.215 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.215 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.215 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.215 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.215 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.215 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:21:38.780 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.037 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.037 23:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.037 23:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.037 23:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.037 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.038 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:39.038 23:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.038 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.295 00:21:39.295 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.295 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.295 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.553 { 00:21:39.553 "cntlid": 19, 00:21:39.553 "qid": 0, 00:21:39.553 "state": "enabled", 00:21:39.553 "thread": "nvmf_tgt_poll_group_000", 00:21:39.553 "listen_address": { 00:21:39.553 "trtype": "TCP", 00:21:39.553 "adrfam": "IPv4", 00:21:39.553 "traddr": "10.0.0.2", 00:21:39.553 "trsvcid": "4420" 00:21:39.553 }, 00:21:39.553 "peer_address": { 00:21:39.553 "trtype": "TCP", 00:21:39.553 "adrfam": "IPv4", 00:21:39.553 "traddr": "10.0.0.1", 00:21:39.553 "trsvcid": "34952" 00:21:39.553 }, 00:21:39.553 "auth": { 00:21:39.553 "state": "completed", 00:21:39.553 "digest": "sha256", 00:21:39.553 "dhgroup": "ffdhe3072" 00:21:39.553 } 00:21:39.553 } 00:21:39.553 ]' 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.553 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.810 23:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:21:40.373 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:40.374 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:40.630 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:21:40.630 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.630 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:40.630 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:40.630 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.630 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.631 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.631 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.631 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.631 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.631 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.631 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.888 00:21:40.888 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.888 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.888 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.145 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.145 23:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.145 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.145 23:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 23:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.145 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.145 { 00:21:41.145 "cntlid": 21, 00:21:41.145 "qid": 0, 00:21:41.145 "state": "enabled", 00:21:41.145 "thread": "nvmf_tgt_poll_group_000", 00:21:41.145 "listen_address": { 00:21:41.145 "trtype": "TCP", 00:21:41.145 "adrfam": "IPv4", 00:21:41.145 "traddr": "10.0.0.2", 00:21:41.145 "trsvcid": "4420" 00:21:41.145 }, 00:21:41.145 "peer_address": { 00:21:41.145 "trtype": "TCP", 00:21:41.145 "adrfam": "IPv4", 00:21:41.145 "traddr": "10.0.0.1", 00:21:41.145 "trsvcid": "34970" 00:21:41.145 }, 00:21:41.145 "auth": { 00:21:41.145 "state": "completed", 00:21:41.145 "digest": "sha256", 00:21:41.145 "dhgroup": "ffdhe3072" 00:21:41.145 } 00:21:41.145 } 00:21:41.145 ]' 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.146 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.404 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
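
Each RPC-path pass is then repeated through the kernel initiator: nvme connect carries the secrets inline in the standard DHHC-1 representation rather than by keyring name. The two-digit field after DHHC-1 selects the hash transformation applied to the base64 secret that follows (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the four test keys in this run carry prefixes DHHC-1:00: through DHHC-1:03:. A minimal sketch of the command pair, secrets elided and with hypothetical $SUBNQN/$HOSTNQN/$HOSTID placeholders:

    # Kernel-initiator equivalent of the authenticated attach above;
    # -i 1 requests a single I/O queue, matching the traced invocation.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:00:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller secret>:'
    nvme disconnect -n "$SUBNQN"
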
00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.969 23:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.227 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.484 00:21:42.484 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.485 { 00:21:42.485 "cntlid": 23, 00:21:42.485 "qid": 0, 00:21:42.485 "state": "enabled", 00:21:42.485 "thread": "nvmf_tgt_poll_group_000", 00:21:42.485 "listen_address": { 00:21:42.485 "trtype": "TCP", 00:21:42.485 "adrfam": "IPv4", 00:21:42.485 "traddr": "10.0.0.2", 00:21:42.485 "trsvcid": "4420" 00:21:42.485 }, 00:21:42.485 "peer_address": { 00:21:42.485 "trtype": "TCP", 00:21:42.485 "adrfam": "IPv4", 00:21:42.485 "traddr": "10.0.0.1", 00:21:42.485 "trsvcid": "60024" 00:21:42.485 }, 00:21:42.485 "auth": { 00:21:42.485 "state": "completed", 00:21:42.485 "digest": "sha256", 00:21:42.485 "dhgroup": "ffdhe3072" 00:21:42.485 } 00:21:42.485 } 00:21:42.485 ]' 00:21:42.485 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.742 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.999 23:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.562 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.819 00:21:43.819 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.819 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.819 23:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.076 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.076 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.076 23:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.076 23:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.076 23:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.076 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.076 { 00:21:44.076 "cntlid": 25, 00:21:44.076 "qid": 0, 00:21:44.076 "state": "enabled", 00:21:44.076 "thread": "nvmf_tgt_poll_group_000", 00:21:44.076 "listen_address": { 00:21:44.076 "trtype": "TCP", 00:21:44.076 "adrfam": "IPv4", 00:21:44.076 "traddr": "10.0.0.2", 00:21:44.076 "trsvcid": "4420" 00:21:44.076 }, 00:21:44.076 "peer_address": { 00:21:44.076 "trtype": "TCP", 00:21:44.076 "adrfam": "IPv4", 00:21:44.076 "traddr": "10.0.0.1", 00:21:44.076 "trsvcid": "60062" 00:21:44.076 }, 00:21:44.076 "auth": { 00:21:44.076 "state": "completed", 00:21:44.076 "digest": "sha256", 00:21:44.076 "dhgroup": "ffdhe4096" 00:21:44.076 } 00:21:44.076 } 00:21:44.077 ]' 00:21:44.077 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.077 23:25:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.077 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.077 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.077 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.334 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.334 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.334 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.334 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:44.911 23:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.168 23:25:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.168 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.425 00:21:45.425 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.425 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.425 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.683 { 00:21:45.683 "cntlid": 27, 00:21:45.683 "qid": 0, 00:21:45.683 "state": "enabled", 00:21:45.683 "thread": "nvmf_tgt_poll_group_000", 00:21:45.683 "listen_address": { 00:21:45.683 "trtype": "TCP", 00:21:45.683 "adrfam": "IPv4", 00:21:45.683 "traddr": "10.0.0.2", 00:21:45.683 "trsvcid": "4420" 00:21:45.683 }, 00:21:45.683 "peer_address": { 00:21:45.683 "trtype": "TCP", 00:21:45.683 "adrfam": "IPv4", 00:21:45.683 "traddr": "10.0.0.1", 00:21:45.683 "trsvcid": "60092" 00:21:45.683 }, 00:21:45.683 "auth": { 00:21:45.683 "state": "completed", 00:21:45.683 "digest": "sha256", 00:21:45.683 "dhgroup": "ffdhe4096" 00:21:45.683 } 00:21:45.683 } 00:21:45.683 ]' 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.683 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.940 23:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:46.503 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.760 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.018 00:21:47.018 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.018 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.018 23:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.018 { 00:21:47.018 "cntlid": 29, 00:21:47.018 "qid": 0, 00:21:47.018 "state": "enabled", 00:21:47.018 "thread": "nvmf_tgt_poll_group_000", 00:21:47.018 "listen_address": { 00:21:47.018 "trtype": "TCP", 00:21:47.018 "adrfam": "IPv4", 00:21:47.018 "traddr": "10.0.0.2", 00:21:47.018 "trsvcid": "4420" 00:21:47.018 }, 00:21:47.018 "peer_address": { 00:21:47.018 "trtype": "TCP", 00:21:47.018 "adrfam": "IPv4", 00:21:47.018 "traddr": "10.0.0.1", 00:21:47.018 "trsvcid": "60126" 00:21:47.018 }, 00:21:47.018 "auth": { 00:21:47.018 "state": "completed", 00:21:47.018 "digest": "sha256", 00:21:47.018 "dhgroup": "ffdhe4096" 00:21:47.018 } 00:21:47.018 } 00:21:47.018 ]' 00:21:47.018 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.274 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.531 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
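
The pass/fail signal in every iteration is the auth object inside the nvmf_subsystem_get_qpairs output: state must come back completed, and the echoed digest and dhgroup must match the single pair that bdev_nvme_set_options allowed, so a fallback to any other combination would fail the check. (The key3 iterations grant only --dhchap-key with no controller key, so those passes exercise unidirectional authentication.) The three assertions, restated as a standalone check reusing the $rpc/$SUBNQN placeholders from the earlier sketch:

    qpairs=$($rpc nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
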
00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:48.096 23:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.096 23:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.360 23:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.360 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.360 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.360 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.654 { 00:21:48.654 "cntlid": 31, 00:21:48.654 "qid": 0, 00:21:48.654 "state": "enabled", 00:21:48.654 "thread": "nvmf_tgt_poll_group_000", 00:21:48.654 "listen_address": { 00:21:48.654 "trtype": "TCP", 00:21:48.654 "adrfam": "IPv4", 00:21:48.654 "traddr": "10.0.0.2", 00:21:48.654 "trsvcid": 
"4420" 00:21:48.654 }, 00:21:48.654 "peer_address": { 00:21:48.654 "trtype": "TCP", 00:21:48.654 "adrfam": "IPv4", 00:21:48.654 "traddr": "10.0.0.1", 00:21:48.654 "trsvcid": "60160" 00:21:48.654 }, 00:21:48.654 "auth": { 00:21:48.654 "state": "completed", 00:21:48.654 "digest": "sha256", 00:21:48.654 "dhgroup": "ffdhe4096" 00:21:48.654 } 00:21:48.654 } 00:21:48.654 ]' 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.654 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.924 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.924 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.924 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.924 23:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:49.489 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.746 23:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.004 00:21:50.004 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.004 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.004 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.262 { 00:21:50.262 "cntlid": 33, 00:21:50.262 "qid": 0, 00:21:50.262 "state": "enabled", 00:21:50.262 "thread": "nvmf_tgt_poll_group_000", 00:21:50.262 "listen_address": { 00:21:50.262 "trtype": "TCP", 00:21:50.262 "adrfam": "IPv4", 00:21:50.262 "traddr": "10.0.0.2", 00:21:50.262 "trsvcid": "4420" 00:21:50.262 }, 00:21:50.262 "peer_address": { 00:21:50.262 "trtype": "TCP", 00:21:50.262 "adrfam": "IPv4", 00:21:50.262 "traddr": "10.0.0.1", 00:21:50.262 "trsvcid": "60182" 00:21:50.262 }, 00:21:50.262 "auth": { 00:21:50.262 "state": "completed", 00:21:50.262 "digest": "sha256", 00:21:50.262 "dhgroup": "ffdhe6144" 00:21:50.262 } 00:21:50.262 } 00:21:50.262 ]' 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.262 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.521 23:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:21:51.085 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.085 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.085 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.085 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.085 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.086 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.086 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:51.086 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.343 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.600 00:21:51.600 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.600 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.600 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.858 { 00:21:51.858 "cntlid": 35, 00:21:51.858 "qid": 0, 00:21:51.858 "state": "enabled", 00:21:51.858 "thread": "nvmf_tgt_poll_group_000", 00:21:51.858 "listen_address": { 00:21:51.858 "trtype": "TCP", 00:21:51.858 "adrfam": "IPv4", 00:21:51.858 "traddr": "10.0.0.2", 00:21:51.858 "trsvcid": "4420" 00:21:51.858 }, 00:21:51.858 "peer_address": { 00:21:51.858 "trtype": "TCP", 00:21:51.858 "adrfam": "IPv4", 00:21:51.858 "traddr": "10.0.0.1", 00:21:51.858 "trsvcid": "52184" 00:21:51.858 }, 00:21:51.858 "auth": { 00:21:51.858 "state": "completed", 00:21:51.858 "digest": "sha256", 00:21:51.858 "dhgroup": "ffdhe6144" 00:21:51.858 } 00:21:51.858 } 00:21:51.858 ]' 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:51.858 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.115 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.115 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.115 23:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.115 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
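The secrets handed to nvme connect above use the NVMe DH-HMAC-CHAP secret representation (TP 8006, as implemented by nvme-cli/libnvme) rather than keyring names: a "DHHC-1:" prefix, a two-digit hash identifier, the base64-encoded key material with a trailing CRC-32, and a closing ":". The identifier appears to be 00 for an untransformed secret and 01/02/03 for secrets transformed with SHA-256/SHA-384/SHA-512, which is consistent with the strings in this log: the :01, :02, and :03 secrets decode to 32-, 48-, and 64-byte keys plus the 4 CRC bytes. A small hypothetical helper to check that, using only the prefix-stripping shown here (pass it one of the DHHC-1 strings from the trace):

    # Report the decoded payload length (key bytes + 4 CRC-32 bytes) of a
    # DHHC-1 secret passed as $1. Hypothetical helper, not part of auth.sh.
    dhchap_payload_len() {
        local b64=${1#DHHC-1:??:}                  # strip "DHHC-1:<hash-id>:"
        printf '%s' "${b64%:}" | base64 -d | wc -c # drop trailing ":" and decode
    }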
00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:52.679 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.937 23:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.194 00:21:53.194 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.194 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.194 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.451 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.451 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.451 23:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
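For orientation in the xtrace above: the @91-@96 markers correspond to a three-deep loop in target/auth.sh, so every digest is crossed with every DH group and every key index, and the host's bdev_nvme options are re-pinned before each connect_authenticate call. Reconstructed from the markers in this trace (only the shape is shown; the arrays are populated earlier in the script, outside this excerpt):

    for digest in "${digests[@]}"; do                                    # auth.sh@91
      for dhgroup in "${dhgroups[@]}"; do                                # auth.sh@92
        for keyid in "${!keys[@]}"; do                                   # auth.sh@93
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"    # auth.sh@94
          connect_authenticate "$digest" "$dhgroup" "$keyid"             # auth.sh@96
        done
      done
    done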
00:21:53.451 23:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.451 23:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.451 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.451 { 00:21:53.451 "cntlid": 37, 00:21:53.451 "qid": 0, 00:21:53.451 "state": "enabled", 00:21:53.451 "thread": "nvmf_tgt_poll_group_000", 00:21:53.451 "listen_address": { 00:21:53.451 "trtype": "TCP", 00:21:53.452 "adrfam": "IPv4", 00:21:53.452 "traddr": "10.0.0.2", 00:21:53.452 "trsvcid": "4420" 00:21:53.452 }, 00:21:53.452 "peer_address": { 00:21:53.452 "trtype": "TCP", 00:21:53.452 "adrfam": "IPv4", 00:21:53.452 "traddr": "10.0.0.1", 00:21:53.452 "trsvcid": "52210" 00:21:53.452 }, 00:21:53.452 "auth": { 00:21:53.452 "state": "completed", 00:21:53.452 "digest": "sha256", 00:21:53.452 "dhgroup": "ffdhe6144" 00:21:53.452 } 00:21:53.452 } 00:21:53.452 ]' 00:21:53.452 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.452 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.452 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.452 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.709 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.709 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.709 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.709 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.709 23:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:54.274 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.532 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.788 00:21:55.044 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.044 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.044 23:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.044 { 00:21:55.044 "cntlid": 39, 00:21:55.044 "qid": 0, 00:21:55.044 "state": "enabled", 00:21:55.044 "thread": "nvmf_tgt_poll_group_000", 00:21:55.044 "listen_address": { 00:21:55.044 "trtype": "TCP", 00:21:55.044 "adrfam": "IPv4", 00:21:55.044 "traddr": "10.0.0.2", 00:21:55.044 "trsvcid": "4420" 00:21:55.044 }, 00:21:55.044 "peer_address": { 00:21:55.044 "trtype": "TCP", 00:21:55.044 "adrfam": "IPv4", 00:21:55.044 "traddr": "10.0.0.1", 00:21:55.044 "trsvcid": "52232" 00:21:55.044 }, 00:21:55.044 "auth": { 00:21:55.044 "state": "completed", 00:21:55.044 "digest": "sha256", 00:21:55.044 "dhgroup": "ffdhe6144" 00:21:55.044 } 00:21:55.044 } 00:21:55.044 ]' 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.044 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.301 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.301 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.301 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.301 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.301 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.301 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:21:55.865 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.865 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.865 23:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.865 23:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 23:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.123 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.123 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.123 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:56.123 23:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.123 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.687 00:21:56.687 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.687 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.687 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.945 { 00:21:56.945 "cntlid": 41, 00:21:56.945 "qid": 0, 00:21:56.945 "state": "enabled", 00:21:56.945 "thread": "nvmf_tgt_poll_group_000", 00:21:56.945 "listen_address": { 00:21:56.945 "trtype": "TCP", 00:21:56.945 "adrfam": "IPv4", 00:21:56.945 "traddr": "10.0.0.2", 00:21:56.945 "trsvcid": "4420" 00:21:56.945 }, 00:21:56.945 "peer_address": { 00:21:56.945 "trtype": "TCP", 00:21:56.945 "adrfam": "IPv4", 00:21:56.945 "traddr": "10.0.0.1", 00:21:56.945 "trsvcid": "52268" 00:21:56.945 }, 00:21:56.945 "auth": { 00:21:56.945 "state": "completed", 00:21:56.945 "digest": "sha256", 00:21:56.945 "dhgroup": "ffdhe8192" 00:21:56.945 } 00:21:56.945 } 00:21:56.945 ]' 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.945 23:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.203 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:57.768 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.025 23:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.589 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.589 { 00:21:58.589 "cntlid": 43, 00:21:58.589 "qid": 0, 00:21:58.589 "state": "enabled", 00:21:58.589 "thread": "nvmf_tgt_poll_group_000", 00:21:58.589 "listen_address": { 00:21:58.589 "trtype": "TCP", 00:21:58.589 "adrfam": "IPv4", 00:21:58.589 "traddr": "10.0.0.2", 00:21:58.589 "trsvcid": "4420" 00:21:58.589 }, 00:21:58.589 "peer_address": { 00:21:58.589 "trtype": "TCP", 00:21:58.589 "adrfam": "IPv4", 00:21:58.589 "traddr": "10.0.0.1", 00:21:58.589 "trsvcid": "52296" 00:21:58.589 }, 00:21:58.589 "auth": { 00:21:58.589 "state": "completed", 00:21:58.589 "digest": "sha256", 00:21:58.589 "dhgroup": "ffdhe8192" 00:21:58.589 } 00:21:58.589 } 00:21:58.589 ]' 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.589 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.847 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.847 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.847 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.847 23:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:59.411 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.669 23:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.233 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.233 { 00:22:00.233 "cntlid": 45, 00:22:00.233 "qid": 0, 00:22:00.233 "state": "enabled", 00:22:00.233 "thread": "nvmf_tgt_poll_group_000", 00:22:00.233 "listen_address": { 00:22:00.233 "trtype": "TCP", 00:22:00.233 "adrfam": "IPv4", 00:22:00.233 "traddr": "10.0.0.2", 00:22:00.233 
"trsvcid": "4420" 00:22:00.233 }, 00:22:00.233 "peer_address": { 00:22:00.233 "trtype": "TCP", 00:22:00.233 "adrfam": "IPv4", 00:22:00.233 "traddr": "10.0.0.1", 00:22:00.233 "trsvcid": "52318" 00:22:00.233 }, 00:22:00.233 "auth": { 00:22:00.233 "state": "completed", 00:22:00.233 "digest": "sha256", 00:22:00.233 "dhgroup": "ffdhe8192" 00:22:00.233 } 00:22:00.233 } 00:22:00.233 ]' 00:22:00.233 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.490 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.747 23:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.311 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:22:01.567 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:01.568 23:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.568 23:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.568 23:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.568 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.568 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.825 00:22:01.825 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.825 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.825 23:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.082 { 00:22:02.082 "cntlid": 47, 00:22:02.082 "qid": 0, 00:22:02.082 "state": "enabled", 00:22:02.082 "thread": "nvmf_tgt_poll_group_000", 00:22:02.082 "listen_address": { 00:22:02.082 "trtype": "TCP", 00:22:02.082 "adrfam": "IPv4", 00:22:02.082 "traddr": "10.0.0.2", 00:22:02.082 "trsvcid": "4420" 00:22:02.082 }, 00:22:02.082 "peer_address": { 00:22:02.082 "trtype": "TCP", 00:22:02.082 "adrfam": "IPv4", 00:22:02.082 "traddr": "10.0.0.1", 00:22:02.082 "trsvcid": "35226" 00:22:02.082 }, 00:22:02.082 "auth": { 00:22:02.082 "state": "completed", 00:22:02.082 "digest": "sha256", 00:22:02.082 "dhgroup": "ffdhe8192" 00:22:02.082 } 00:22:02.082 } 00:22:02.082 ]' 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.082 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.339 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.339 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.339 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:22:02.339 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.339 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:02.903 23:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.160 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.417 00:22:03.417 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.417 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.417 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.675 { 00:22:03.675 "cntlid": 49, 00:22:03.675 "qid": 0, 00:22:03.675 "state": "enabled", 00:22:03.675 "thread": "nvmf_tgt_poll_group_000", 00:22:03.675 "listen_address": { 00:22:03.675 "trtype": "TCP", 00:22:03.675 "adrfam": "IPv4", 00:22:03.675 "traddr": "10.0.0.2", 00:22:03.675 "trsvcid": "4420" 00:22:03.675 }, 00:22:03.675 "peer_address": { 00:22:03.675 "trtype": "TCP", 00:22:03.675 "adrfam": "IPv4", 00:22:03.675 "traddr": "10.0.0.1", 00:22:03.675 "trsvcid": "35248" 00:22:03.675 }, 00:22:03.675 "auth": { 00:22:03.675 "state": "completed", 00:22:03.675 "digest": "sha384", 00:22:03.675 "dhgroup": "null" 00:22:03.675 } 00:22:03.675 } 00:22:03.675 ]' 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.675 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.933 23:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.498 23:26:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:04.498 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.754 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.015 00:22:05.015 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.015 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.015 23:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.015 { 00:22:05.015 "cntlid": 51, 00:22:05.015 "qid": 0, 00:22:05.015 "state": "enabled", 00:22:05.015 "thread": "nvmf_tgt_poll_group_000", 00:22:05.015 "listen_address": { 00:22:05.015 "trtype": "TCP", 00:22:05.015 "adrfam": "IPv4", 00:22:05.015 "traddr": "10.0.0.2", 00:22:05.015 "trsvcid": "4420" 00:22:05.015 }, 00:22:05.015 "peer_address": { 00:22:05.015 "trtype": "TCP", 00:22:05.015 "adrfam": "IPv4", 00:22:05.015 "traddr": "10.0.0.1", 00:22:05.015 "trsvcid": "35278" 00:22:05.015 }, 00:22:05.015 "auth": { 00:22:05.015 "state": "completed", 00:22:05.015 "digest": "sha384", 00:22:05.015 "dhgroup": "null" 00:22:05.015 } 00:22:05.015 } 00:22:05.015 ]' 00:22:05.015 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.309 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:05.877 23:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:06.135 
23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.135 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.394 00:22:06.394 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.394 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.394 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.653 { 00:22:06.653 "cntlid": 53, 00:22:06.653 "qid": 0, 00:22:06.653 "state": "enabled", 00:22:06.653 "thread": "nvmf_tgt_poll_group_000", 00:22:06.653 "listen_address": { 00:22:06.653 "trtype": "TCP", 00:22:06.653 "adrfam": "IPv4", 00:22:06.653 "traddr": "10.0.0.2", 00:22:06.653 "trsvcid": "4420" 00:22:06.653 }, 00:22:06.653 "peer_address": { 00:22:06.653 "trtype": "TCP", 00:22:06.653 "adrfam": "IPv4", 00:22:06.653 "traddr": "10.0.0.1", 00:22:06.653 "trsvcid": "35308" 00:22:06.653 }, 00:22:06.653 "auth": { 00:22:06.653 "state": "completed", 00:22:06.653 "digest": "sha384", 00:22:06.653 "dhgroup": "null" 00:22:06.653 } 00:22:06.653 } 00:22:06.653 ]' 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.653 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.912 23:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:07.480 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.740 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.999 00:22:07.999 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.999 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.999 23:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.999 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.999 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.999 23:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.999 23:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.999 23:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.999 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.999 { 00:22:07.999 "cntlid": 55, 00:22:07.999 "qid": 0, 00:22:07.999 "state": "enabled", 00:22:07.999 "thread": "nvmf_tgt_poll_group_000", 00:22:07.999 "listen_address": { 00:22:07.999 "trtype": "TCP", 00:22:07.999 "adrfam": "IPv4", 00:22:07.999 "traddr": "10.0.0.2", 00:22:07.999 "trsvcid": "4420" 00:22:07.999 }, 00:22:07.999 "peer_address": { 00:22:07.999 "trtype": "TCP", 00:22:07.999 "adrfam": "IPv4", 00:22:07.999 "traddr": "10.0.0.1", 00:22:07.999 "trsvcid": "35320" 00:22:07.999 }, 00:22:07.999 "auth": { 00:22:08.000 "state": "completed", 00:22:08.000 "digest": "sha384", 00:22:08.000 "dhgroup": "null" 00:22:08.000 } 00:22:08.000 } 00:22:08.000 ]' 00:22:08.000 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.258 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.258 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.258 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:08.259 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.259 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.259 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.259 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.259 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:08.827 23:26:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:08.827 23:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.086 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.345 00:22:09.345 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.345 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.345 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.603 { 00:22:09.603 "cntlid": 57, 00:22:09.603 "qid": 0, 00:22:09.603 "state": "enabled", 00:22:09.603 "thread": "nvmf_tgt_poll_group_000", 00:22:09.603 "listen_address": { 00:22:09.603 "trtype": "TCP", 00:22:09.603 "adrfam": "IPv4", 00:22:09.603 "traddr": "10.0.0.2", 00:22:09.603 "trsvcid": "4420" 00:22:09.603 }, 00:22:09.603 "peer_address": { 00:22:09.603 "trtype": "TCP", 00:22:09.603 "adrfam": "IPv4", 00:22:09.603 "traddr": "10.0.0.1", 00:22:09.603 "trsvcid": "35348" 00:22:09.603 }, 00:22:09.603 "auth": { 00:22:09.603 "state": "completed", 00:22:09.603 "digest": "sha384", 00:22:09.603 "dhgroup": "ffdhe2048" 00:22:09.603 } 00:22:09.603 } 00:22:09.603 ]' 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.603 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.862 23:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:10.429 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.688 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.946 00:22:10.946 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.946 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.946 23:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.205 { 00:22:11.205 "cntlid": 59, 00:22:11.205 "qid": 0, 00:22:11.205 "state": "enabled", 00:22:11.205 "thread": "nvmf_tgt_poll_group_000", 00:22:11.205 "listen_address": { 00:22:11.205 "trtype": "TCP", 00:22:11.205 "adrfam": "IPv4", 00:22:11.205 "traddr": "10.0.0.2", 00:22:11.205 "trsvcid": "4420" 00:22:11.205 }, 00:22:11.205 "peer_address": { 00:22:11.205 "trtype": "TCP", 00:22:11.205 "adrfam": "IPv4", 00:22:11.205 
"traddr": "10.0.0.1", 00:22:11.205 "trsvcid": "35368" 00:22:11.205 }, 00:22:11.205 "auth": { 00:22:11.205 "state": "completed", 00:22:11.205 "digest": "sha384", 00:22:11.205 "dhgroup": "ffdhe2048" 00:22:11.205 } 00:22:11.205 } 00:22:11.205 ]' 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.205 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.463 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:12.029 23:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.029 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.285 00:22:12.285 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.285 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.285 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.544 { 00:22:12.544 "cntlid": 61, 00:22:12.544 "qid": 0, 00:22:12.544 "state": "enabled", 00:22:12.544 "thread": "nvmf_tgt_poll_group_000", 00:22:12.544 "listen_address": { 00:22:12.544 "trtype": "TCP", 00:22:12.544 "adrfam": "IPv4", 00:22:12.544 "traddr": "10.0.0.2", 00:22:12.544 "trsvcid": "4420" 00:22:12.544 }, 00:22:12.544 "peer_address": { 00:22:12.544 "trtype": "TCP", 00:22:12.544 "adrfam": "IPv4", 00:22:12.544 "traddr": "10.0.0.1", 00:22:12.544 "trsvcid": "56900" 00:22:12.544 }, 00:22:12.544 "auth": { 00:22:12.544 "state": "completed", 00:22:12.544 "digest": "sha384", 00:22:12.544 "dhgroup": "ffdhe2048" 00:22:12.544 } 00:22:12.544 } 00:22:12.544 ]' 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.544 23:26:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.802 23:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:13.370 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:13.628 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:22:13.628 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.628 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:13.628 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.629 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.887 00:22:13.887 23:26:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.887 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.887 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.146 { 00:22:14.146 "cntlid": 63, 00:22:14.146 "qid": 0, 00:22:14.146 "state": "enabled", 00:22:14.146 "thread": "nvmf_tgt_poll_group_000", 00:22:14.146 "listen_address": { 00:22:14.146 "trtype": "TCP", 00:22:14.146 "adrfam": "IPv4", 00:22:14.146 "traddr": "10.0.0.2", 00:22:14.146 "trsvcid": "4420" 00:22:14.146 }, 00:22:14.146 "peer_address": { 00:22:14.146 "trtype": "TCP", 00:22:14.146 "adrfam": "IPv4", 00:22:14.146 "traddr": "10.0.0.1", 00:22:14.146 "trsvcid": "56930" 00:22:14.146 }, 00:22:14.146 "auth": { 00:22:14.146 "state": "completed", 00:22:14.146 "digest": "sha384", 00:22:14.146 "dhgroup": "ffdhe2048" 00:22:14.146 } 00:22:14.146 } 00:22:14.146 ]' 00:22:14.146 23:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.146 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.405 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
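(Aside: the trace here repeats one fixed pattern per digest/dhgroup/keyid combination — restrict the host-side driver to a single DH-HMAC-CHAP digest and DH group, register the host NQN on the subsystem with that key, attach and authenticate a controller, check the negotiated auth parameters on the queue pair, then detach and re-verify through the kernel initiator with nvme connect / nvme disconnect. Below is a minimal standalone sketch of one such iteration; it assumes the target and host SPDK applications, the subsystem, its 10.0.0.2:4420 TCP listener, and the named keys key0..key3 / ckey0..ckey3 were all set up earlier in the run, and that rpc_cmd in the trace is effectively the same rpc.py pointed at the target's default socket. Every flag is copied from the trace above; only the shell variables are added for readability.)

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock                      # host-side bdev_nvme application
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    DIGEST=sha384 DHGROUP=ffdhe3072 KEYID=0

    # Limit the host driver to one digest/DH-group pair, so the test knows
    # exactly which combination the subsequent attach must negotiate.
    $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests $DIGEST --dhchap-dhgroups $DHGROUP

    # Allow the host on the subsystem, bound to key N (the controller key is
    # added too when bidirectional authentication is being exercised; the
    # key3 iterations in the trace omit it).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
        --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

    # Attach an authenticated controller from the host application over TCP...
    $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN \
        --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

    # ...confirm the target saw the expected digest/dhgroup and that the auth
    # state is "completed", then tear the controller down again.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'
    $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

(The closing nvme connect / nvme disconnect in each iteration re-runs the same authentication through the kernel initiator; the DHHC-1:..:<base64>: strings passed to it are the standard NVMe DH-HMAC-CHAP secret representation of the same key material.)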
00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.973 23:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.973 23:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.973 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.974 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.232 00:22:15.232 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.232 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.232 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.491 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.491 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.491 23:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.491 23:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.491 23:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.491 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.491 { 
00:22:15.491 "cntlid": 65, 00:22:15.491 "qid": 0, 00:22:15.491 "state": "enabled", 00:22:15.491 "thread": "nvmf_tgt_poll_group_000", 00:22:15.492 "listen_address": { 00:22:15.492 "trtype": "TCP", 00:22:15.492 "adrfam": "IPv4", 00:22:15.492 "traddr": "10.0.0.2", 00:22:15.492 "trsvcid": "4420" 00:22:15.492 }, 00:22:15.492 "peer_address": { 00:22:15.492 "trtype": "TCP", 00:22:15.492 "adrfam": "IPv4", 00:22:15.492 "traddr": "10.0.0.1", 00:22:15.492 "trsvcid": "56946" 00:22:15.492 }, 00:22:15.492 "auth": { 00:22:15.492 "state": "completed", 00:22:15.492 "digest": "sha384", 00:22:15.492 "dhgroup": "ffdhe3072" 00:22:15.492 } 00:22:15.492 } 00:22:15.492 ]' 00:22:15.492 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.492 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.492 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.492 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:15.492 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.750 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.750 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.750 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.750 23:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:16.318 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.577 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.836 00:22:16.836 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.836 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.836 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.095 { 00:22:17.095 "cntlid": 67, 00:22:17.095 "qid": 0, 00:22:17.095 "state": "enabled", 00:22:17.095 "thread": "nvmf_tgt_poll_group_000", 00:22:17.095 "listen_address": { 00:22:17.095 "trtype": "TCP", 00:22:17.095 "adrfam": "IPv4", 00:22:17.095 "traddr": "10.0.0.2", 00:22:17.095 "trsvcid": "4420" 00:22:17.095 }, 00:22:17.095 "peer_address": { 00:22:17.095 "trtype": "TCP", 00:22:17.095 "adrfam": "IPv4", 00:22:17.095 "traddr": "10.0.0.1", 00:22:17.095 "trsvcid": "56972" 00:22:17.095 }, 00:22:17.095 "auth": { 00:22:17.095 "state": "completed", 00:22:17.095 "digest": "sha384", 00:22:17.095 "dhgroup": "ffdhe3072" 00:22:17.095 } 00:22:17.095 } 00:22:17.095 ]' 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.095 23:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.095 23:26:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:17.095 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.095 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.095 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.095 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.354 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:17.921 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.922 23:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.181 00:22:18.181 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.181 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.181 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.441 { 00:22:18.441 "cntlid": 69, 00:22:18.441 "qid": 0, 00:22:18.441 "state": "enabled", 00:22:18.441 "thread": "nvmf_tgt_poll_group_000", 00:22:18.441 "listen_address": { 00:22:18.441 "trtype": "TCP", 00:22:18.441 "adrfam": "IPv4", 00:22:18.441 "traddr": "10.0.0.2", 00:22:18.441 "trsvcid": "4420" 00:22:18.441 }, 00:22:18.441 "peer_address": { 00:22:18.441 "trtype": "TCP", 00:22:18.441 "adrfam": "IPv4", 00:22:18.441 "traddr": "10.0.0.1", 00:22:18.441 "trsvcid": "57008" 00:22:18.441 }, 00:22:18.441 "auth": { 00:22:18.441 "state": "completed", 00:22:18.441 "digest": "sha384", 00:22:18.441 "dhgroup": "ffdhe3072" 00:22:18.441 } 00:22:18.441 } 00:22:18.441 ]' 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.441 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.700 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:18.700 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.700 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.700 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.700 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.700 23:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret 
DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:19.268 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.527 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.786 00:22:19.786 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.786 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.786 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.045 { 00:22:20.045 "cntlid": 71, 00:22:20.045 "qid": 0, 00:22:20.045 "state": "enabled", 00:22:20.045 "thread": "nvmf_tgt_poll_group_000", 00:22:20.045 "listen_address": { 00:22:20.045 "trtype": "TCP", 00:22:20.045 "adrfam": "IPv4", 00:22:20.045 "traddr": "10.0.0.2", 00:22:20.045 "trsvcid": "4420" 00:22:20.045 }, 00:22:20.045 "peer_address": { 00:22:20.045 "trtype": "TCP", 00:22:20.045 "adrfam": "IPv4", 00:22:20.045 "traddr": "10.0.0.1", 00:22:20.045 "trsvcid": "57044" 00:22:20.045 }, 00:22:20.045 "auth": { 00:22:20.045 "state": "completed", 00:22:20.045 "digest": "sha384", 00:22:20.045 "dhgroup": "ffdhe3072" 00:22:20.045 } 00:22:20.045 } 00:22:20.045 ]' 00:22:20.045 23:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.045 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.304 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:20.873 23:26:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.132 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.391 00:22:21.391 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.391 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.391 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.652 { 00:22:21.652 "cntlid": 73, 00:22:21.652 "qid": 0, 00:22:21.652 "state": "enabled", 00:22:21.652 "thread": "nvmf_tgt_poll_group_000", 00:22:21.652 "listen_address": { 00:22:21.652 "trtype": "TCP", 00:22:21.652 "adrfam": "IPv4", 00:22:21.652 "traddr": "10.0.0.2", 00:22:21.652 "trsvcid": "4420" 00:22:21.652 }, 00:22:21.652 "peer_address": { 00:22:21.652 "trtype": "TCP", 00:22:21.652 "adrfam": "IPv4", 00:22:21.652 "traddr": "10.0.0.1", 00:22:21.652 "trsvcid": "57074" 00:22:21.652 }, 00:22:21.652 "auth": { 00:22:21.652 
"state": "completed", 00:22:21.652 "digest": "sha384", 00:22:21.652 "dhgroup": "ffdhe4096" 00:22:21.652 } 00:22:21.652 } 00:22:21.652 ]' 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.652 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.978 23:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.546 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.805 00:22:22.805 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.805 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.805 23:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.064 { 00:22:23.064 "cntlid": 75, 00:22:23.064 "qid": 0, 00:22:23.064 "state": "enabled", 00:22:23.064 "thread": "nvmf_tgt_poll_group_000", 00:22:23.064 "listen_address": { 00:22:23.064 "trtype": "TCP", 00:22:23.064 "adrfam": "IPv4", 00:22:23.064 "traddr": "10.0.0.2", 00:22:23.064 "trsvcid": "4420" 00:22:23.064 }, 00:22:23.064 "peer_address": { 00:22:23.064 "trtype": "TCP", 00:22:23.064 "adrfam": "IPv4", 00:22:23.064 "traddr": "10.0.0.1", 00:22:23.064 "trsvcid": "36478" 00:22:23.064 }, 00:22:23.064 "auth": { 00:22:23.064 "state": "completed", 00:22:23.064 "digest": "sha384", 00:22:23.064 "dhgroup": "ffdhe4096" 00:22:23.064 } 00:22:23.064 } 00:22:23.064 ]' 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.064 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:23.323 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.323 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.323 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.323 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.323 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:23.890 23:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.149 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:24.407 00:22:24.407 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.407 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.407 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.665 { 00:22:24.665 "cntlid": 77, 00:22:24.665 "qid": 0, 00:22:24.665 "state": "enabled", 00:22:24.665 "thread": "nvmf_tgt_poll_group_000", 00:22:24.665 "listen_address": { 00:22:24.665 "trtype": "TCP", 00:22:24.665 "adrfam": "IPv4", 00:22:24.665 "traddr": "10.0.0.2", 00:22:24.665 "trsvcid": "4420" 00:22:24.665 }, 00:22:24.665 "peer_address": { 00:22:24.665 "trtype": "TCP", 00:22:24.665 "adrfam": "IPv4", 00:22:24.665 "traddr": "10.0.0.1", 00:22:24.665 "trsvcid": "36512" 00:22:24.665 }, 00:22:24.665 "auth": { 00:22:24.665 "state": "completed", 00:22:24.665 "digest": "sha384", 00:22:24.665 "dhgroup": "ffdhe4096" 00:22:24.665 } 00:22:24.665 } 00:22:24.665 ]' 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.665 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.923 23:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:25.490 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.747 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.006 00:22:26.006 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.006 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.006 23:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.265 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.265 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.266 { 00:22:26.266 "cntlid": 79, 00:22:26.266 "qid": 
0, 00:22:26.266 "state": "enabled", 00:22:26.266 "thread": "nvmf_tgt_poll_group_000", 00:22:26.266 "listen_address": { 00:22:26.266 "trtype": "TCP", 00:22:26.266 "adrfam": "IPv4", 00:22:26.266 "traddr": "10.0.0.2", 00:22:26.266 "trsvcid": "4420" 00:22:26.266 }, 00:22:26.266 "peer_address": { 00:22:26.266 "trtype": "TCP", 00:22:26.266 "adrfam": "IPv4", 00:22:26.266 "traddr": "10.0.0.1", 00:22:26.266 "trsvcid": "36540" 00:22:26.266 }, 00:22:26.266 "auth": { 00:22:26.266 "state": "completed", 00:22:26.266 "digest": "sha384", 00:22:26.266 "dhgroup": "ffdhe4096" 00:22:26.266 } 00:22:26.266 } 00:22:26.266 ]' 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.266 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.525 23:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:27.093 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:27.353 23:26:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.353 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.611 00:22:27.611 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.611 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.611 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.870 { 00:22:27.870 "cntlid": 81, 00:22:27.870 "qid": 0, 00:22:27.870 "state": "enabled", 00:22:27.870 "thread": "nvmf_tgt_poll_group_000", 00:22:27.870 "listen_address": { 00:22:27.870 "trtype": "TCP", 00:22:27.870 "adrfam": "IPv4", 00:22:27.870 "traddr": "10.0.0.2", 00:22:27.870 "trsvcid": "4420" 00:22:27.870 }, 00:22:27.870 "peer_address": { 00:22:27.870 "trtype": "TCP", 00:22:27.870 "adrfam": "IPv4", 00:22:27.870 "traddr": "10.0.0.1", 00:22:27.870 "trsvcid": "36558" 00:22:27.870 }, 00:22:27.870 "auth": { 00:22:27.870 "state": "completed", 00:22:27.870 "digest": "sha384", 00:22:27.870 "dhgroup": "ffdhe6144" 00:22:27.870 } 00:22:27.870 } 00:22:27.870 ]' 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.870 23:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.130 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:28.699 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.958 23:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.217 00:22:29.217 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.217 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.217 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.476 { 00:22:29.476 "cntlid": 83, 00:22:29.476 "qid": 0, 00:22:29.476 "state": "enabled", 00:22:29.476 "thread": "nvmf_tgt_poll_group_000", 00:22:29.476 "listen_address": { 00:22:29.476 "trtype": "TCP", 00:22:29.476 "adrfam": "IPv4", 00:22:29.476 "traddr": "10.0.0.2", 00:22:29.476 "trsvcid": "4420" 00:22:29.476 }, 00:22:29.476 "peer_address": { 00:22:29.476 "trtype": "TCP", 00:22:29.476 "adrfam": "IPv4", 00:22:29.476 "traddr": "10.0.0.1", 00:22:29.476 "trsvcid": "36604" 00:22:29.476 }, 00:22:29.476 "auth": { 00:22:29.476 "state": "completed", 00:22:29.476 "digest": "sha384", 00:22:29.476 "dhgroup": "ffdhe6144" 00:22:29.476 } 00:22:29.476 } 00:22:29.476 ]' 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.476 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.735 23:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret 
DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:30.305 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.564 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.822 00:22:30.823 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.823 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.823 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.082 { 00:22:31.082 "cntlid": 85, 00:22:31.082 "qid": 0, 00:22:31.082 "state": "enabled", 00:22:31.082 "thread": "nvmf_tgt_poll_group_000", 00:22:31.082 "listen_address": { 00:22:31.082 "trtype": "TCP", 00:22:31.082 "adrfam": "IPv4", 00:22:31.082 "traddr": "10.0.0.2", 00:22:31.082 "trsvcid": "4420" 00:22:31.082 }, 00:22:31.082 "peer_address": { 00:22:31.082 "trtype": "TCP", 00:22:31.082 "adrfam": "IPv4", 00:22:31.082 "traddr": "10.0.0.1", 00:22:31.082 "trsvcid": "36620" 00:22:31.082 }, 00:22:31.082 "auth": { 00:22:31.082 "state": "completed", 00:22:31.082 "digest": "sha384", 00:22:31.082 "dhgroup": "ffdhe6144" 00:22:31.082 } 00:22:31.082 } 00:22:31.082 ]' 00:22:31.082 23:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.082 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.341 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
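[annotation] The hostrpc bdev_nvme_set_options call above (its rpc.py expansion follows in the next entry) is the per-iteration setup step: before each authentication attempt, the host-side bdev driver is pinned to a single digest/DH-group pair, so a successful attach proves that exact combination negotiated end to end. A minimal sketch of the loop shape this trace is exercising, assuming the array contents below (only the groups and key indices visible in this excerpt; connect_authenticate is the suite's own function from target/auth.sh):

# Sketch of the digest/DH-group sweep driven by target/auth.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
digest=sha384
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed: the groups seen in this trace

for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3; do                          # the key/ckey pairs exercised above
        # Restrict the host to exactly one digest/DH-group combination.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach, verify the qpair's auth block, then detach (see below).
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done
[/annotation]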
00:22:31.908 23:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.168 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.427 00:22:32.427 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.427 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.427 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.685 { 00:22:32.685 "cntlid": 87, 00:22:32.685 "qid": 0, 00:22:32.685 "state": "enabled", 00:22:32.685 "thread": "nvmf_tgt_poll_group_000", 00:22:32.685 "listen_address": { 00:22:32.685 "trtype": "TCP", 00:22:32.685 "adrfam": "IPv4", 00:22:32.685 "traddr": "10.0.0.2", 00:22:32.685 "trsvcid": "4420" 00:22:32.685 }, 00:22:32.685 "peer_address": { 00:22:32.685 "trtype": "TCP", 00:22:32.685 "adrfam": "IPv4", 00:22:32.685 "traddr": "10.0.0.1", 00:22:32.685 "trsvcid": "51778" 00:22:32.685 }, 00:22:32.685 "auth": { 00:22:32.685 "state": "completed", 
00:22:32.685 "digest": "sha384", 00:22:32.685 "dhgroup": "ffdhe6144" 00:22:32.685 } 00:22:32.685 } 00:22:32.685 ]' 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.685 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.944 23:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.511 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.769 23:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.336 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.336 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.336 { 00:22:34.336 "cntlid": 89, 00:22:34.336 "qid": 0, 00:22:34.336 "state": "enabled", 00:22:34.336 "thread": "nvmf_tgt_poll_group_000", 00:22:34.336 "listen_address": { 00:22:34.336 "trtype": "TCP", 00:22:34.336 "adrfam": "IPv4", 00:22:34.336 "traddr": "10.0.0.2", 00:22:34.336 "trsvcid": "4420" 00:22:34.336 }, 00:22:34.336 "peer_address": { 00:22:34.336 "trtype": "TCP", 00:22:34.336 "adrfam": "IPv4", 00:22:34.336 "traddr": "10.0.0.1", 00:22:34.336 "trsvcid": "51808" 00:22:34.336 }, 00:22:34.336 "auth": { 00:22:34.336 "state": "completed", 00:22:34.337 "digest": "sha384", 00:22:34.337 "dhgroup": "ffdhe8192" 00:22:34.337 } 00:22:34.337 } 00:22:34.337 ]' 00:22:34.337 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.337 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.337 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.595 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.595 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.595 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.595 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.595 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.595 23:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:35.162 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.420 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
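[annotation] Each connect_authenticate pass ends with the same verification the trace shows repeatedly: read back the controller name, dump the subsystem's qpairs, and assert that DH-HMAC-CHAP actually completed with the expected digest and group before detaching. The escaped comparisons in the raw trace (e.g. [[ completed == \c\o\m\p\l\e\t\e\d ]]) are these assertions after xtrace expansion. A condensed sketch, mirroring the rpc.py and jq invocations as they appear in the log; the ffdhe8192 expectation matches the iteration in progress here, and the unsuffixed rpc call is assumed to reach the target's default RPC socket (the host side uses -s /var/tmp/host.sock):

# Verify the freshly attached controller authenticated as expected.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# Tear down so the next key/group pair starts from a clean state.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
[/annotation]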
00:22:35.986 00:22:35.986 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.986 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.986 23:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.245 { 00:22:36.245 "cntlid": 91, 00:22:36.245 "qid": 0, 00:22:36.245 "state": "enabled", 00:22:36.245 "thread": "nvmf_tgt_poll_group_000", 00:22:36.245 "listen_address": { 00:22:36.245 "trtype": "TCP", 00:22:36.245 "adrfam": "IPv4", 00:22:36.245 "traddr": "10.0.0.2", 00:22:36.245 "trsvcid": "4420" 00:22:36.245 }, 00:22:36.245 "peer_address": { 00:22:36.245 "trtype": "TCP", 00:22:36.245 "adrfam": "IPv4", 00:22:36.245 "traddr": "10.0.0.1", 00:22:36.245 "trsvcid": "51850" 00:22:36.245 }, 00:22:36.245 "auth": { 00:22:36.245 "state": "completed", 00:22:36.245 "digest": "sha384", 00:22:36.245 "dhgroup": "ffdhe8192" 00:22:36.245 } 00:22:36.245 } 00:22:36.245 ]' 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.245 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.503 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:37.070 23:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.070 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.328 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.896 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.896 { 
00:22:37.896 "cntlid": 93, 00:22:37.896 "qid": 0, 00:22:37.896 "state": "enabled", 00:22:37.896 "thread": "nvmf_tgt_poll_group_000", 00:22:37.896 "listen_address": { 00:22:37.896 "trtype": "TCP", 00:22:37.896 "adrfam": "IPv4", 00:22:37.896 "traddr": "10.0.0.2", 00:22:37.896 "trsvcid": "4420" 00:22:37.896 }, 00:22:37.896 "peer_address": { 00:22:37.896 "trtype": "TCP", 00:22:37.896 "adrfam": "IPv4", 00:22:37.896 "traddr": "10.0.0.1", 00:22:37.896 "trsvcid": "51876" 00:22:37.896 }, 00:22:37.896 "auth": { 00:22:37.896 "state": "completed", 00:22:37.896 "digest": "sha384", 00:22:37.896 "dhgroup": "ffdhe8192" 00:22:37.896 } 00:22:37.896 } 00:22:37.896 ]' 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.896 23:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.156 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.156 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.156 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.156 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.762 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:39.021 23:26:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:39.021 23:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:39.588 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.588 { 00:22:39.588 "cntlid": 95, 00:22:39.588 "qid": 0, 00:22:39.588 "state": "enabled", 00:22:39.588 "thread": "nvmf_tgt_poll_group_000", 00:22:39.588 "listen_address": { 00:22:39.588 "trtype": "TCP", 00:22:39.588 "adrfam": "IPv4", 00:22:39.588 "traddr": "10.0.0.2", 00:22:39.588 "trsvcid": "4420" 00:22:39.588 }, 00:22:39.588 "peer_address": { 00:22:39.588 "trtype": "TCP", 00:22:39.588 "adrfam": "IPv4", 00:22:39.588 "traddr": "10.0.0.1", 00:22:39.588 "trsvcid": "51904" 00:22:39.588 }, 00:22:39.588 "auth": { 00:22:39.588 "state": "completed", 00:22:39.588 "digest": "sha384", 00:22:39.588 "dhgroup": "ffdhe8192" 00:22:39.588 } 00:22:39.588 } 00:22:39.588 ]' 00:22:39.588 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.847 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.847 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.847 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:39.847 23:26:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.847 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.847 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.847 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.105 23:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.674 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.675 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.933 00:22:40.933 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.933 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.933 23:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.193 { 00:22:41.193 "cntlid": 97, 00:22:41.193 "qid": 0, 00:22:41.193 "state": "enabled", 00:22:41.193 "thread": "nvmf_tgt_poll_group_000", 00:22:41.193 "listen_address": { 00:22:41.193 "trtype": "TCP", 00:22:41.193 "adrfam": "IPv4", 00:22:41.193 "traddr": "10.0.0.2", 00:22:41.193 "trsvcid": "4420" 00:22:41.193 }, 00:22:41.193 "peer_address": { 00:22:41.193 "trtype": "TCP", 00:22:41.193 "adrfam": "IPv4", 00:22:41.193 "traddr": "10.0.0.1", 00:22:41.193 "trsvcid": "51940" 00:22:41.193 }, 00:22:41.193 "auth": { 00:22:41.193 "state": "completed", 00:22:41.193 "digest": "sha512", 00:22:41.193 "dhgroup": "null" 00:22:41.193 } 00:22:41.193 } 00:22:41.193 ]' 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.193 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.451 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:42.015 23:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.273 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.532 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.532 { 00:22:42.532 "cntlid": 99, 00:22:42.532 "qid": 0, 00:22:42.532 "state": "enabled", 00:22:42.532 "thread": "nvmf_tgt_poll_group_000", 00:22:42.532 "listen_address": { 00:22:42.532 "trtype": "TCP", 00:22:42.532 "adrfam": "IPv4", 00:22:42.532 "traddr": "10.0.0.2", 00:22:42.532 "trsvcid": "4420" 00:22:42.532 }, 00:22:42.532 "peer_address": { 00:22:42.532 "trtype": "TCP", 00:22:42.532 "adrfam": "IPv4", 00:22:42.532 "traddr": "10.0.0.1", 00:22:42.532 "trsvcid": "43232" 00:22:42.532 }, 00:22:42.532 "auth": { 00:22:42.532 "state": "completed", 00:22:42.532 "digest": "sha512", 00:22:42.532 "dhgroup": "null" 00:22:42.532 } 00:22:42.532 } 00:22:42.532 ]' 00:22:42.532 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.792 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.050 23:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:43.618 23:26:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.618 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.876 00:22:43.877 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.877 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.877 23:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:44.135 { 00:22:44.135 "cntlid": 101, 00:22:44.135 "qid": 0, 00:22:44.135 "state": "enabled", 00:22:44.135 "thread": "nvmf_tgt_poll_group_000", 00:22:44.135 "listen_address": { 00:22:44.135 "trtype": "TCP", 00:22:44.135 "adrfam": "IPv4", 00:22:44.135 "traddr": "10.0.0.2", 00:22:44.135 "trsvcid": "4420" 00:22:44.135 }, 00:22:44.135 "peer_address": { 00:22:44.135 "trtype": "TCP", 00:22:44.135 "adrfam": "IPv4", 00:22:44.135 "traddr": "10.0.0.1", 00:22:44.135 "trsvcid": "43268" 00:22:44.135 }, 00:22:44.135 "auth": 
{ 00:22:44.135 "state": "completed", 00:22:44.135 "digest": "sha512", 00:22:44.135 "dhgroup": "null" 00:22:44.135 } 00:22:44.135 } 00:22:44.135 ]' 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.135 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.394 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:44.957 23:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:45.215 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:45.215 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.215 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:45.215 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:45.215 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:45.215 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.216 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:45.216 23:26:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.216 23:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.216 23:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.216 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:45.216 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:45.474 00:22:45.474 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.474 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.474 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.733 { 00:22:45.733 "cntlid": 103, 00:22:45.733 "qid": 0, 00:22:45.733 "state": "enabled", 00:22:45.733 "thread": "nvmf_tgt_poll_group_000", 00:22:45.733 "listen_address": { 00:22:45.733 "trtype": "TCP", 00:22:45.733 "adrfam": "IPv4", 00:22:45.733 "traddr": "10.0.0.2", 00:22:45.733 "trsvcid": "4420" 00:22:45.733 }, 00:22:45.733 "peer_address": { 00:22:45.733 "trtype": "TCP", 00:22:45.733 "adrfam": "IPv4", 00:22:45.733 "traddr": "10.0.0.1", 00:22:45.733 "trsvcid": "43292" 00:22:45.733 }, 00:22:45.733 "auth": { 00:22:45.733 "state": "completed", 00:22:45.733 "digest": "sha512", 00:22:45.733 "dhgroup": "null" 00:22:45.733 } 00:22:45.733 } 00:22:45.733 ]' 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.733 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.992 23:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.560 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.820 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.079 00:22:47.079 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.079 23:26:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.079 23:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.079 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.079 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.079 23:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.079 23:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.079 23:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.079 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.079 { 00:22:47.079 "cntlid": 105, 00:22:47.079 "qid": 0, 00:22:47.079 "state": "enabled", 00:22:47.079 "thread": "nvmf_tgt_poll_group_000", 00:22:47.079 "listen_address": { 00:22:47.079 "trtype": "TCP", 00:22:47.079 "adrfam": "IPv4", 00:22:47.079 "traddr": "10.0.0.2", 00:22:47.079 "trsvcid": "4420" 00:22:47.080 }, 00:22:47.080 "peer_address": { 00:22:47.080 "trtype": "TCP", 00:22:47.080 "adrfam": "IPv4", 00:22:47.080 "traddr": "10.0.0.1", 00:22:47.080 "trsvcid": "43328" 00:22:47.080 }, 00:22:47.080 "auth": { 00:22:47.080 "state": "completed", 00:22:47.080 "digest": "sha512", 00:22:47.080 "dhgroup": "ffdhe2048" 00:22:47.080 } 00:22:47.080 } 00:22:47.080 ]' 00:22:47.080 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.080 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.080 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.339 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:47.339 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.339 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.339 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.339 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.339 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:22:47.907 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
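The records above also show the second leg of each iteration: after the SPDK host controller is detached, the same key pair is exercised through the kernel initiator before the host registration is torn down. Stripped of timestamps, that leg reduces to roughly the following sketch, where $hostid, $key, and $ckey stand in for the literal uuid and DHHC-1 secrets printed in the trace (the DHHC-1:<nn>: prefix is the standard NVMe-oF textual representation of a DH-HMAC-CHAP shared secret):

# Connect via the kernel NVMe-oF initiator with the same secrets ...
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
# ... tear the session down (the log expects "disconnected 1 controller(s)") ...
nvme disconnect -n "$subnqn"
# ... and deregister the host before the next digest/dhgroup/key combination.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"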
00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:48.166 23:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.166 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.424 00:22:48.424 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.424 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.424 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.683 { 00:22:48.683 "cntlid": 107, 00:22:48.683 "qid": 0, 00:22:48.683 "state": "enabled", 00:22:48.683 "thread": 
"nvmf_tgt_poll_group_000", 00:22:48.683 "listen_address": { 00:22:48.683 "trtype": "TCP", 00:22:48.683 "adrfam": "IPv4", 00:22:48.683 "traddr": "10.0.0.2", 00:22:48.683 "trsvcid": "4420" 00:22:48.683 }, 00:22:48.683 "peer_address": { 00:22:48.683 "trtype": "TCP", 00:22:48.683 "adrfam": "IPv4", 00:22:48.683 "traddr": "10.0.0.1", 00:22:48.683 "trsvcid": "43348" 00:22:48.683 }, 00:22:48.683 "auth": { 00:22:48.683 "state": "completed", 00:22:48.683 "digest": "sha512", 00:22:48.683 "dhgroup": "ffdhe2048" 00:22:48.683 } 00:22:48.683 } 00:22:48.683 ]' 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.683 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.942 23:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:49.510 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:49.770 23:26:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:49.770 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:50.030
00:22:50.030 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:50.030 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:50.030 23:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:50.288 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:50.288 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:50.289 {
00:22:50.289 "cntlid": 109,
00:22:50.289 "qid": 0,
00:22:50.289 "state": "enabled",
00:22:50.289 "thread": "nvmf_tgt_poll_group_000",
00:22:50.289 "listen_address": {
00:22:50.289 "trtype": "TCP",
00:22:50.289 "adrfam": "IPv4",
00:22:50.289 "traddr": "10.0.0.2",
00:22:50.289 "trsvcid": "4420"
00:22:50.289 },
00:22:50.289 "peer_address": {
00:22:50.289 "trtype": "TCP",
00:22:50.289 "adrfam": "IPv4",
00:22:50.289 "traddr": "10.0.0.1",
00:22:50.289 "trsvcid": "43382"
00:22:50.289 },
00:22:50.289 "auth": {
00:22:50.289 "state": "completed",
00:22:50.289 "digest": "sha512",
00:22:50.289 "dhgroup": "ffdhe2048"
00:22:50.289 }
00:22:50.289 }
00:22:50.289 ]'
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:50.289 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:50.548 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K:
00:22:51.116 23:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:51.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:51.116 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:22:51.376 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:22:51.376
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:51.636 {
00:22:51.636 "cntlid": 111,
00:22:51.636 "qid": 0,
00:22:51.636 "state": "enabled",
00:22:51.636 "thread": "nvmf_tgt_poll_group_000",
00:22:51.636 "listen_address": {
00:22:51.636 "trtype": "TCP",
00:22:51.636 "adrfam": "IPv4",
00:22:51.636 "traddr": "10.0.0.2",
00:22:51.636 "trsvcid": "4420"
00:22:51.636 },
00:22:51.636 "peer_address": {
00:22:51.636 "trtype": "TCP",
00:22:51.636 "adrfam": "IPv4",
00:22:51.636 "traddr": "10.0.0.1",
00:22:51.636 "trsvcid": "58122"
00:22:51.636 },
00:22:51.636 "auth": {
00:22:51.636 "state": "completed",
00:22:51.636 "digest": "sha512",
00:22:51.636 "dhgroup": "ffdhe2048"
00:22:51.636 }
00:22:51.636 }
00:22:51.636 ]'
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:51.636 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:51.895 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:22:51.895 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:51.895 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:51.895 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:51.895 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:51.895 23:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=:
00:22:52.462 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:52.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:52.462 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
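Each pass in this trace is the same connect_authenticate helper from target/auth.sh, run for one digest/DH-group/key combination. The sketch below condenses a pass as reconstructed from the xtrace; $hostnqn, $hostid, and the keys/ckeys arrays stand in for values the script sets up earlier in the run, so treat it as an outline rather than the verbatim source.

    # One auth round-trip: allow the host, attach over TCP, verify the qpair
    # negotiated the expected DH-HMAC-CHAP parameters, then repeat via nvme-cli.
    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest=$1 dhgroup=$2 key="key$3"
        # ckey stays empty when no controller key exists for this index
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
            -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "$key" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
        hostrpc bdev_nvme_detach_controller nvme0
        # Same key pair again, this time through the kernel initiator
        nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
            -q "$hostnqn" --hostid "$hostid" --dhchap-secret "${keys[$3]}" \
            ${ckeys[$3]:+--dhchap-ctrl-secret "${ckeys[$3]}"}
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0
        rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    }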
00:22:52.462 23:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.462 23:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.721 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:52.980
00:22:52.980 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:52.980 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:52.980 23:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:53.239 {
00:22:53.239 "cntlid": 113,
00:22:53.239 "qid": 0,
00:22:53.239 "state": "enabled",
00:22:53.239 "thread": "nvmf_tgt_poll_group_000",
00:22:53.239 "listen_address": {
00:22:53.239 "trtype": "TCP",
00:22:53.239 "adrfam": "IPv4",
00:22:53.239 "traddr": "10.0.0.2",
00:22:53.239 "trsvcid": "4420"
00:22:53.239 },
00:22:53.239 "peer_address": {
00:22:53.239 "trtype": "TCP",
00:22:53.239 "adrfam": "IPv4",
00:22:53.239 "traddr": "10.0.0.1",
00:22:53.239 "trsvcid": "58146"
00:22:53.239 },
00:22:53.239 "auth": {
00:22:53.239 "state": "completed",
00:22:53.239 "digest": "sha512",
00:22:53.239 "dhgroup": "ffdhe3072"
00:22:53.239 }
00:22:53.239 }
00:22:53.239 ]'
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:53.239 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:53.498 23:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=:
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:54.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:54.066 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.326 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:54.585
00:22:54.585 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:54.585 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:54.585 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:54.844 {
00:22:54.844 "cntlid": 115,
00:22:54.844 "qid": 0,
00:22:54.844 "state": "enabled",
00:22:54.844 "thread": "nvmf_tgt_poll_group_000",
00:22:54.844 "listen_address": {
00:22:54.844 "trtype": "TCP",
00:22:54.844 "adrfam": "IPv4",
00:22:54.844 "traddr": "10.0.0.2",
00:22:54.844 "trsvcid": "4420"
00:22:54.844 },
00:22:54.844 "peer_address": {
00:22:54.844 "trtype": "TCP",
00:22:54.844 "adrfam": "IPv4",
00:22:54.844 "traddr": "10.0.0.1",
00:22:54.844 "trsvcid": "58160"
00:22:54.844 },
00:22:54.844 "auth": {
00:22:54.844 "state": "completed",
00:22:54.844 "digest": "sha512",
00:22:54.844 "dhgroup": "ffdhe3072"
00:22:54.844 }
00:22:54.844 }
00:22:54.844 ]'
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:54.844 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:55.105 23:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==:
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:55.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
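Every hostrpc line above expands (at target/auth.sh@31) into the same rpc.py call against /var/tmp/host.sock: the test runs two SPDK applications, the nvmf target driven by rpc_cmd on its default socket, and a host-side bdev_nvme instance that plays the NVMe/TCP initiator. A wrapper along these lines is all hostrpc amounts to; the path and socket match the trace, the function body is a sketch:

    # Forward a host-side RPC to the initiator's SPDK application socket
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # e.g., tear down the test controller between passes:
    hostrpc bdev_nvme_detach_controller nvme0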
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:55.716 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:55.976
00:22:55.976 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:55.976 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:55.976 23:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:56.235 {
00:22:56.235 "cntlid": 117,
00:22:56.235 "qid": 0,
00:22:56.235 "state": "enabled",
00:22:56.235 "thread": "nvmf_tgt_poll_group_000",
00:22:56.235 "listen_address": {
00:22:56.235 "trtype": "TCP",
00:22:56.235 "adrfam": "IPv4",
00:22:56.235 "traddr": "10.0.0.2",
00:22:56.235 "trsvcid": "4420"
00:22:56.235 },
00:22:56.235 "peer_address": {
00:22:56.235 "trtype": "TCP",
00:22:56.235 "adrfam": "IPv4",
00:22:56.235 "traddr": "10.0.0.1",
00:22:56.235 "trsvcid": "58192"
00:22:56.235 },
00:22:56.235 "auth": {
00:22:56.235 "state": "completed",
00:22:56.235 "digest": "sha512",
00:22:56.235 "dhgroup": "ffdhe3072"
00:22:56.235 }
00:22:56.235 }
00:22:56.235 ]'
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:56.235 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:56.494 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:56.494 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:56.494 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:56.494 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:56.494 23:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K:
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:57.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:57.062 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:22:57.321 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:22:57.580
00:22:57.580 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:57.580 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:57.580 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:57.839 {
00:22:57.839 "cntlid": 119,
00:22:57.839 "qid": 0,
00:22:57.839 "state": "enabled",
00:22:57.839 "thread": "nvmf_tgt_poll_group_000",
00:22:57.839 "listen_address": {
00:22:57.839 "trtype": "TCP",
00:22:57.839 "adrfam": "IPv4",
00:22:57.839 "traddr": "10.0.0.2",
00:22:57.839 "trsvcid": "4420"
00:22:57.839 },
00:22:57.839 "peer_address": {
00:22:57.839 "trtype": "TCP",
00:22:57.839 "adrfam": "IPv4",
00:22:57.839 "traddr": "10.0.0.1",
00:22:57.839 "trsvcid": "58216"
00:22:57.839 },
00:22:57.839 "auth": {
00:22:57.839 "state": "completed",
00:22:57.839 "digest": "sha512",
00:22:57.839 "dhgroup": "ffdhe3072"
00:22:57.839 }
00:22:57.839 }
00:22:57.839 ]'
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:57.839 23:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:58.098 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:58.098 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:58.098 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=:
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:58.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
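On the kernel-initiator side the same keys travel as DHHC-1 secret strings, as in the nvme connect lines above. Reading the `DHHC-1:NN:` prefix as the secret transformation identifier (00 for an untransformed secret, nonzero values for hash-transformed variants) follows the NVMe in-band authentication secret representation, not anything printed in this log, so take that interpretation as an assumption. The invocation pattern, with the secrets elided:

    # One authenticated connect/disconnect as nvme-cli performs it; the real
    # DHHC-1 secrets are the generated test keys shown in full in the trace.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:02:...' \
        --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0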
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:58.667 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:22:58.926 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:22:58.926 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:58.926 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:58.926 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:22:58.926 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:22:58.926 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:58.927 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:58.927 23:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:58.927 23:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:58.927 23:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:58.927 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:58.927 23:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:59.186
00:22:59.186 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:59.186 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:59.186 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:59.446 {
00:22:59.446 "cntlid": 121,
00:22:59.446 "qid": 0,
00:22:59.446 "state": "enabled",
00:22:59.446 "thread": "nvmf_tgt_poll_group_000",
00:22:59.446 "listen_address": {
00:22:59.446 "trtype": "TCP",
00:22:59.446 "adrfam": "IPv4",
00:22:59.446 "traddr": "10.0.0.2",
00:22:59.446 "trsvcid": "4420"
00:22:59.446 },
00:22:59.446 "peer_address": {
00:22:59.446 "trtype": "TCP",
00:22:59.446 "adrfam": "IPv4",
00:22:59.446 "traddr": "10.0.0.1",
00:22:59.446 "trsvcid": "58252"
00:22:59.446 },
00:22:59.446 "auth": {
00:22:59.446 "state": "completed",
00:22:59.446 "digest": "sha512",
00:22:59.446 "dhgroup": "ffdhe4096"
00:22:59.446 }
00:22:59.446 }
00:22:59.446 ]'
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:59.446 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:59.705 23:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=:
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:00.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:00.274 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:00.533
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:00.793 {
00:23:00.793 "cntlid": 123,
00:23:00.793 "qid": 0,
00:23:00.793 "state": "enabled",
00:23:00.793 "thread": "nvmf_tgt_poll_group_000",
00:23:00.793 "listen_address": {
00:23:00.793 "trtype": "TCP",
00:23:00.793 "adrfam": "IPv4",
00:23:00.793 "traddr": "10.0.0.2",
00:23:00.793 "trsvcid": "4420"
00:23:00.793 },
00:23:00.793 "peer_address": {
00:23:00.793 "trtype": "TCP",
00:23:00.793 "adrfam": "IPv4",
00:23:00.793 "traddr": "10.0.0.1",
00:23:00.793 "trsvcid": "58288"
00:23:00.793 },
00:23:00.793 "auth": {
00:23:00.793 "state": "completed",
00:23:00.793 "digest": "sha512",
00:23:00.793 "dhgroup": "ffdhe4096"
00:23:00.793 }
00:23:00.793 }
00:23:00.793 ]'
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:00.793 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:01.051 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:01.051 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:23:01.051 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:01.051 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
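The @92/@93 markers in the trace show where the sweep restarts: an outer loop over DH groups and an inner loop over key indices, with the host options re-pinned before every pass so a successful attach proves exactly one negotiated combination. In outline (array contents assumed from the groups and key ids visible in this log):

    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do      # 0 1 2 3
            # Restrict the host to a single digest/DH-group combination
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done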
00:23:01.051 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:01.051 23:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:01.309 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==:
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:01.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:01.875 23:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:02.133
00:23:02.133 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:02.133 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:02.133 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:02.392 {
00:23:02.392 "cntlid": 125,
00:23:02.392 "qid": 0,
00:23:02.392 "state": "enabled",
00:23:02.392 "thread": "nvmf_tgt_poll_group_000",
00:23:02.392 "listen_address": {
00:23:02.392 "trtype": "TCP",
00:23:02.392 "adrfam": "IPv4",
00:23:02.392 "traddr": "10.0.0.2",
00:23:02.392 "trsvcid": "4420"
00:23:02.392 },
00:23:02.392 "peer_address": {
00:23:02.392 "trtype": "TCP",
00:23:02.392 "adrfam": "IPv4",
00:23:02.392 "traddr": "10.0.0.1",
00:23:02.392 "trsvcid": "60262"
00:23:02.392 },
00:23:02.392 "auth": {
00:23:02.392 "state": "completed",
00:23:02.392 "digest": "sha512",
00:23:02.392 "dhgroup": "ffdhe4096"
00:23:02.392 }
00:23:02.392 }
00:23:02.392 ]'
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:02.392 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:02.743 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:02.743 23:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K:
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:03.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:03.308 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:23:03.566 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:23:03.824
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:03.824 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:04.082 23:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:04.082 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:04.082 {
00:23:04.082 "cntlid": 127,
00:23:04.082 "qid": 0,
00:23:04.082 "state": "enabled",
00:23:04.082 "thread": "nvmf_tgt_poll_group_000",
00:23:04.082 "listen_address": {
00:23:04.082 "trtype": "TCP",
00:23:04.082 "adrfam": "IPv4",
00:23:04.082 "traddr": "10.0.0.2",
00:23:04.082 "trsvcid": "4420"
00:23:04.082 },
00:23:04.082 "peer_address": {
00:23:04.082 "trtype": "TCP",
00:23:04.082 "adrfam": "IPv4",
00:23:04.082 "traddr": "10.0.0.1",
00:23:04.082 "trsvcid": "60300"
00:23:04.082 },
00:23:04.082 "auth": {
00:23:04.082 "state": "completed",
00:23:04.082 "digest": "sha512",
00:23:04.082 "dhgroup": "ffdhe4096"
00:23:04.082 }
00:23:04.082 }
00:23:04.082 ]'
00:23:04.082 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:04.082 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:04.082 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:04.082 23:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:23:04.082 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:04.082 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:04.082 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:04.341 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:04.341 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=:
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:04.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
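Note how the key3 passes above differ from key0 through key2: nvmf_subsystem_add_host carries no --dhchap-ctrlr-key and nvme connect no --dhchap-ctrl-secret, because ckeys[3] is unset and the ${ckeys[$3]:+...} expansion collapses to nothing. Key3 therefore exercises unidirectional authentication (only the host proves itself), while the other keys exercise bidirectional DH-HMAC-CHAP. The same expansion in isolation, with $subnqn and $hostnqn as stand-ins for the NQNs used throughout this trace:

    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # () when ckeys[3] is unset
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key3 "${ckey[@]}"               # host-to-controller auth only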
connect_authenticate sha512 ffdhe6144 0 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.907 23:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.474 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.474 { 00:23:05.474 "cntlid": 129, 00:23:05.474 "qid": 0, 00:23:05.474 "state": "enabled", 00:23:05.474 "thread": "nvmf_tgt_poll_group_000", 00:23:05.474 "listen_address": { 00:23:05.474 "trtype": "TCP", 00:23:05.474 "adrfam": "IPv4", 00:23:05.474 "traddr": "10.0.0.2", 00:23:05.474 "trsvcid": "4420" 00:23:05.474 }, 00:23:05.474 "peer_address": { 00:23:05.474 "trtype": "TCP", 00:23:05.474 "adrfam": "IPv4", 00:23:05.474 "traddr": "10.0.0.1", 00:23:05.474 "trsvcid": "60318" 00:23:05.474 }, 00:23:05.474 "auth": { 00:23:05.474 "state": "completed", 00:23:05.474 "digest": "sha512", 00:23:05.474 "dhgroup": "ffdhe6144" 00:23:05.474 } 00:23:05.474 } 00:23:05.474 ]' 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.474 23:27:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.474 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:05.732 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.732 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.732 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.732 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.733 23:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:06.299 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.558 23:27:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.558 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.817 00:23:07.076 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.076 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.076 23:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.076 { 00:23:07.076 "cntlid": 131, 00:23:07.076 "qid": 0, 00:23:07.076 "state": "enabled", 00:23:07.076 "thread": "nvmf_tgt_poll_group_000", 00:23:07.076 "listen_address": { 00:23:07.076 "trtype": "TCP", 00:23:07.076 "adrfam": "IPv4", 00:23:07.076 "traddr": "10.0.0.2", 00:23:07.076 "trsvcid": "4420" 00:23:07.076 }, 00:23:07.076 "peer_address": { 00:23:07.076 "trtype": "TCP", 00:23:07.076 "adrfam": "IPv4", 00:23:07.076 "traddr": "10.0.0.1", 00:23:07.076 "trsvcid": "60342" 00:23:07.076 }, 00:23:07.076 "auth": { 00:23:07.076 "state": "completed", 00:23:07.076 "digest": "sha512", 00:23:07.076 "dhgroup": "ffdhe6144" 00:23:07.076 } 00:23:07.076 } 00:23:07.076 ]' 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.076 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.335 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:07.335 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.335 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.335 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.335 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.335 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:07.903 23:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.162 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.421 00:23:08.421 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.421 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.421 23:27:17 
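Each combination is also exercised against the kernel initiator: the nvme connect invocations above carry the keys in the TP 8006 text form, DHHC-1:<t>:<base64>:, where the base64 field encodes the secret with a CRC-32 appended and <t> records how the secret is transformed (00 for a plain secret, 01/02/03 for SHA-256/384/512). --dhchap-secret authenticates the host; adding --dhchap-ctrl-secret makes the exchange bidirectional. A sketch with placeholder secrets (the real values were generated earlier in the test):

    # Placeholder DHHC-1 strings; substitute real secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:<base64-host-key>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<base64-ctrl-key>:"

    # Drop the kernel session again before the host entry is removed.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
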
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:08.680 { 00:23:08.680 "cntlid": 133, 00:23:08.680 "qid": 0, 00:23:08.680 "state": "enabled", 00:23:08.680 "thread": "nvmf_tgt_poll_group_000", 00:23:08.680 "listen_address": { 00:23:08.680 "trtype": "TCP", 00:23:08.680 "adrfam": "IPv4", 00:23:08.680 "traddr": "10.0.0.2", 00:23:08.680 "trsvcid": "4420" 00:23:08.680 }, 00:23:08.680 "peer_address": { 00:23:08.680 "trtype": "TCP", 00:23:08.680 "adrfam": "IPv4", 00:23:08.680 "traddr": "10.0.0.1", 00:23:08.680 "trsvcid": "60378" 00:23:08.680 }, 00:23:08.680 "auth": { 00:23:08.680 "state": "completed", 00:23:08.680 "digest": "sha512", 00:23:08.680 "dhgroup": "ffdhe6144" 00:23:08.680 } 00:23:08.680 } 00:23:08.680 ]' 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:08.680 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:08.939 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.939 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.939 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.939 23:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.505 23:27:18 
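Before every attach attempt the initiator is pinned to exactly one digest and one DH group through bdev_nvme_set_options, so each cell of the digest/dhgroup matrix is negotiated deliberately rather than left to defaults; the next cycle does this for key3. The host-side call in isolation:

    # Restrict the SPDK initiator to a single digest/DH-group pair for
    # the next DH-HMAC-CHAP negotiation.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
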
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:09.505 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:09.786 23:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:10.046 00:23:10.046 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.046 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:10.046 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.305 { 00:23:10.305 "cntlid": 135, 00:23:10.305 "qid": 0, 00:23:10.305 "state": "enabled", 00:23:10.305 "thread": "nvmf_tgt_poll_group_000", 00:23:10.305 "listen_address": { 00:23:10.305 "trtype": "TCP", 00:23:10.305 "adrfam": "IPv4", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "trsvcid": "4420" 00:23:10.305 }, 
00:23:10.305 "peer_address": { 00:23:10.305 "trtype": "TCP", 00:23:10.305 "adrfam": "IPv4", 00:23:10.305 "traddr": "10.0.0.1", 00:23:10.305 "trsvcid": "60398" 00:23:10.305 }, 00:23:10.305 "auth": { 00:23:10.305 "state": "completed", 00:23:10.305 "digest": "sha512", 00:23:10.305 "dhgroup": "ffdhe6144" 00:23:10.305 } 00:23:10.305 } 00:23:10.305 ]' 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:10.305 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.564 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.564 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.564 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.564 23:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:11.133 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.392 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.962 00:23:11.962 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.962 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.962 23:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.962 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.962 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.962 23:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.962 23:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.963 23:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.963 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.963 { 00:23:11.963 "cntlid": 137, 00:23:11.963 "qid": 0, 00:23:11.963 "state": "enabled", 00:23:11.963 "thread": "nvmf_tgt_poll_group_000", 00:23:11.963 "listen_address": { 00:23:11.963 "trtype": "TCP", 00:23:11.963 "adrfam": "IPv4", 00:23:11.963 "traddr": "10.0.0.2", 00:23:11.963 "trsvcid": "4420" 00:23:11.963 }, 00:23:11.963 "peer_address": { 00:23:11.963 "trtype": "TCP", 00:23:11.963 "adrfam": "IPv4", 00:23:11.963 "traddr": "10.0.0.1", 00:23:11.963 "trsvcid": "42500" 00:23:11.963 }, 00:23:11.963 "auth": { 00:23:11.963 "state": "completed", 00:23:11.963 "digest": "sha512", 00:23:11.963 "dhgroup": "ffdhe8192" 00:23:11.963 } 00:23:11.963 } 00:23:11.963 ]' 00:23:11.963 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.222 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.222 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.222 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:12.222 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:12.222 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.222 23:27:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.222 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.481 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.050 23:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.050 23:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.051 23:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.051 23:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.051 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.051 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.619 00:23:13.619 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.619 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.619 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.879 { 00:23:13.879 "cntlid": 139, 00:23:13.879 "qid": 0, 00:23:13.879 "state": "enabled", 00:23:13.879 "thread": "nvmf_tgt_poll_group_000", 00:23:13.879 "listen_address": { 00:23:13.879 "trtype": "TCP", 00:23:13.879 "adrfam": "IPv4", 00:23:13.879 "traddr": "10.0.0.2", 00:23:13.879 "trsvcid": "4420" 00:23:13.879 }, 00:23:13.879 "peer_address": { 00:23:13.879 "trtype": "TCP", 00:23:13.879 "adrfam": "IPv4", 00:23:13.879 "traddr": "10.0.0.1", 00:23:13.879 "trsvcid": "42522" 00:23:13.879 }, 00:23:13.879 "auth": { 00:23:13.879 "state": "completed", 00:23:13.879 "digest": "sha512", 00:23:13.879 "dhgroup": "ffdhe8192" 00:23:13.879 } 00:23:13.879 } 00:23:13.879 ]' 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.879 23:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.138 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MmQwMDVlM2IwNjRiNjcxYzExZWEzYjEyNGQwNzA5ZDJJFnLN: --dhchap-ctrl-secret DHHC-1:02:ODMzNmMxMmZlODk3ZmNkZTk4YmU4NWM2ZjQ0MjhjODVhMzhiZmYwOTc0MDAxNmQyBcoZTw==: 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.708 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.967 23:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.534 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.534 { 00:23:15.534 "cntlid": 141, 00:23:15.534 "qid": 0, 00:23:15.534 "state": "enabled", 00:23:15.534 "thread": "nvmf_tgt_poll_group_000", 00:23:15.534 "listen_address": { 00:23:15.534 "trtype": "TCP", 00:23:15.534 "adrfam": "IPv4", 00:23:15.534 "traddr": "10.0.0.2", 00:23:15.534 "trsvcid": "4420" 00:23:15.534 }, 00:23:15.534 "peer_address": { 00:23:15.534 "trtype": "TCP", 00:23:15.534 "adrfam": "IPv4", 00:23:15.534 "traddr": "10.0.0.1", 00:23:15.534 "trsvcid": "42544" 00:23:15.534 }, 00:23:15.534 "auth": { 00:23:15.534 "state": "completed", 00:23:15.534 "digest": "sha512", 00:23:15.534 "dhgroup": "ffdhe8192" 00:23:15.534 } 00:23:15.534 } 00:23:15.534 ]' 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.534 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.793 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.793 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.793 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.793 23:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDM1OTUyZmJjMzA3MTg1NTRmYWU0Y2E5MjI0MDdkODAwZmM5MmU1NDZlMDM0ZjMxfrHRHg==: --dhchap-ctrl-secret DHHC-1:01:MjVjYTE0ZWU4YmE3OWE3MGVhMzkwNGEzZWZlOTBjNDHJ/j4K: 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:16.359 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.619 23:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.187 00:23:17.187 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.187 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.187 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.446 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.446 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.446 23:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.446 23:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.446 23:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.446 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.446 { 00:23:17.446 "cntlid": 143, 00:23:17.446 "qid": 0, 00:23:17.446 "state": "enabled", 00:23:17.447 "thread": "nvmf_tgt_poll_group_000", 00:23:17.447 "listen_address": { 00:23:17.447 "trtype": "TCP", 00:23:17.447 "adrfam": "IPv4", 00:23:17.447 "traddr": "10.0.0.2", 00:23:17.447 "trsvcid": "4420" 00:23:17.447 }, 00:23:17.447 "peer_address": { 00:23:17.447 "trtype": "TCP", 00:23:17.447 "adrfam": "IPv4", 00:23:17.447 "traddr": "10.0.0.1", 00:23:17.447 "trsvcid": "42566" 00:23:17.447 }, 00:23:17.447 "auth": { 00:23:17.447 "state": "completed", 00:23:17.447 "digest": "sha512", 00:23:17.447 "dhgroup": "ffdhe8192" 00:23:17.447 } 00:23:17.447 } 00:23:17.447 ]' 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.447 
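After the single-group passes, the host is widened back out to every supported digest and DH group at once; the comma-separated option values in the upcoming bdev_nvme_set_options call are produced by the IFS=,/printf pair visible in the xtrace. A plausible reconstruction of that idiom:

    # "${arr[*]}" joins elements with the first character of IFS, so the
    # subshell below emits sha256,sha384,sha512 without a loop.
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
        --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"
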
23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.447 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.706 23:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 23:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.534 23:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.534 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.534 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.792 00:23:18.792 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.792 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.792 23:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.051 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.051 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.051 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.051 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.051 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.051 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.051 { 00:23:19.051 "cntlid": 145, 00:23:19.051 "qid": 0, 00:23:19.052 "state": "enabled", 00:23:19.052 "thread": "nvmf_tgt_poll_group_000", 00:23:19.052 "listen_address": { 00:23:19.052 "trtype": "TCP", 00:23:19.052 "adrfam": "IPv4", 00:23:19.052 "traddr": "10.0.0.2", 00:23:19.052 "trsvcid": "4420" 00:23:19.052 }, 00:23:19.052 "peer_address": { 00:23:19.052 "trtype": "TCP", 00:23:19.052 "adrfam": "IPv4", 00:23:19.052 "traddr": "10.0.0.1", 00:23:19.052 "trsvcid": "42606" 00:23:19.052 }, 00:23:19.052 "auth": { 00:23:19.052 "state": "completed", 00:23:19.052 "digest": "sha512", 00:23:19.052 "dhgroup": "ffdhe8192" 00:23:19.052 } 00:23:19.052 } 00:23:19.052 ]' 00:23:19.052 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:19.052 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.052 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:19.052 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:19.052 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.311 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.311 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.311 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.311 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NWRjZTIxNzkxNDcxZDRlMmY0MWEyMzgwZWRkNTk1NWM0MjExMzdkYzczZjYzODY1vquuzA==: --dhchap-ctrl-secret DHHC-1:03:ZjA4Mzg0ZWU3NmU3ODJhNDE0OTgyYzg2Y2I4OGZiZTlhNDI3YzU2MDYzZGI0YTI5YWM5YjQwMzc3YzJhMjM4MGC56Do=: 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:19.879 23:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:23:20.447 request: 00:23:20.447 { 00:23:20.447 "name": "nvme0", 00:23:20.447 "trtype": "tcp", 00:23:20.447 "traddr": "10.0.0.2", 00:23:20.447 "adrfam": "ipv4", 00:23:20.447 "trsvcid": "4420", 00:23:20.447 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:20.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:20.447 "prchk_reftag": false, 00:23:20.447 "prchk_guard": false, 00:23:20.447 "hdgst": false, 00:23:20.447 "ddgst": false, 00:23:20.447 "dhchap_key": "key2", 00:23:20.447 "method": "bdev_nvme_attach_controller", 00:23:20.447 "req_id": 1 00:23:20.447 } 00:23:20.447 Got JSON-RPC error response 00:23:20.447 response: 00:23:20.447 { 00:23:20.447 "code": -5, 00:23:20.447 "message": "Input/output error" 00:23:20.447 } 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:20.447 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.448 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.448 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.707 request: 00:23:20.707 { 00:23:20.707 "name": "nvme0", 00:23:20.707 "trtype": "tcp", 00:23:20.707 "traddr": "10.0.0.2", 00:23:20.707 "adrfam": "ipv4", 00:23:20.707 "trsvcid": "4420", 00:23:20.707 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:20.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:20.707 "prchk_reftag": false, 00:23:20.707 "prchk_guard": false, 00:23:20.707 "hdgst": false, 00:23:20.707 "ddgst": false, 00:23:20.707 "dhchap_key": "key1", 00:23:20.707 "dhchap_ctrlr_key": "ckey2", 00:23:20.707 "method": "bdev_nvme_attach_controller", 00:23:20.707 "req_id": 1 00:23:20.707 } 00:23:20.707 Got JSON-RPC error response 00:23:20.707 response: 00:23:20.707 { 00:23:20.707 "code": -5, 00:23:20.707 "message": "Input/output error" 00:23:20.707 } 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.966 23:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.225 request: 00:23:21.225 { 00:23:21.225 "name": "nvme0", 00:23:21.225 "trtype": "tcp", 00:23:21.225 "traddr": "10.0.0.2", 00:23:21.225 "adrfam": "ipv4", 00:23:21.225 "trsvcid": "4420", 00:23:21.225 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:21.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:21.225 "prchk_reftag": false, 00:23:21.225 "prchk_guard": false, 00:23:21.225 "hdgst": false, 00:23:21.225 "ddgst": false, 00:23:21.225 "dhchap_key": "key1", 00:23:21.225 "dhchap_ctrlr_key": "ckey1", 00:23:21.225 "method": "bdev_nvme_attach_controller", 00:23:21.225 "req_id": 1 00:23:21.225 } 00:23:21.225 Got JSON-RPC error response 00:23:21.225 response: 00:23:21.225 { 00:23:21.225 "code": -5, 00:23:21.225 "message": "Input/output error" 00:23:21.225 } 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2444802 00:23:21.225 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2444802 ']' 00:23:21.226 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2444802 00:23:21.226 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:21.226 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.226 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2444802 00:23:21.485 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.485 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:23:21.485 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2444802' 00:23:21.485 killing process with pid 2444802 00:23:21.485 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2444802 00:23:21.485 23:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2444802 00:23:22.863 23:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:22.863 23:27:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2465936 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2465936 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2465936 ']' 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.864 23:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.431 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.431 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:23.431 23:27:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.431 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.431 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.690 23:27:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.690 23:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2465936 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2465936 ']' 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
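
At this point the suite has killed the first target (pid 2444802) and relaunched nvmf_tgt inside the test namespace with --wait-for-rpc and the nvmf_auth log flag, then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, using only the binaries visible in this log (the polling loop and the rpc_get_methods probe are assumptions, not the suite's exact waitforlisten body):

    # Sketch: start nvmf_tgt with DH-CHAP auth logging and wait for its
    # JSON-RPC socket. The retry loop is assumed, not copied from common.sh.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # keep polling until the target answers RPCs
    done
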
00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.691 23:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:24.260 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:24.827 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:24.827 { 00:23:24.827 
"cntlid": 1, 00:23:24.827 "qid": 0, 00:23:24.827 "state": "enabled", 00:23:24.827 "thread": "nvmf_tgt_poll_group_000", 00:23:24.827 "listen_address": { 00:23:24.827 "trtype": "TCP", 00:23:24.827 "adrfam": "IPv4", 00:23:24.827 "traddr": "10.0.0.2", 00:23:24.827 "trsvcid": "4420" 00:23:24.827 }, 00:23:24.827 "peer_address": { 00:23:24.827 "trtype": "TCP", 00:23:24.827 "adrfam": "IPv4", 00:23:24.827 "traddr": "10.0.0.1", 00:23:24.827 "trsvcid": "50006" 00:23:24.827 }, 00:23:24.827 "auth": { 00:23:24.827 "state": "completed", 00:23:24.827 "digest": "sha512", 00:23:24.827 "dhgroup": "ffdhe8192" 00:23:24.827 } 00:23:24.827 } 00:23:24.827 ]' 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.827 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:25.086 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:25.086 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:25.086 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.086 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.086 23:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.086 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVmNTRiMDMwNWM4NmFjOTAyMjNiOGJjNDAyMGEyMDVmODQ1NzAyNjM3MzgyY2ViNWVmODI0NjUxMWE0ZGEzY6+OIoE=: 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:25.655 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.914 23:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.173 request: 00:23:26.174 { 00:23:26.174 "name": "nvme0", 00:23:26.174 "trtype": "tcp", 00:23:26.174 "traddr": "10.0.0.2", 00:23:26.174 "adrfam": "ipv4", 00:23:26.174 "trsvcid": "4420", 00:23:26.174 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:26.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:26.174 "prchk_reftag": false, 00:23:26.174 "prchk_guard": false, 00:23:26.174 "hdgst": false, 00:23:26.174 "ddgst": false, 00:23:26.174 "dhchap_key": "key3", 00:23:26.174 "method": "bdev_nvme_attach_controller", 00:23:26.174 "req_id": 1 00:23:26.174 } 00:23:26.174 Got JSON-RPC error response 00:23:26.174 response: 00:23:26.174 { 00:23:26.174 "code": -5, 00:23:26.174 "message": "Input/output error" 00:23:26.174 } 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:26.174 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.433 request: 00:23:26.433 { 00:23:26.433 "name": "nvme0", 00:23:26.433 "trtype": "tcp", 00:23:26.433 "traddr": "10.0.0.2", 00:23:26.433 "adrfam": "ipv4", 00:23:26.433 "trsvcid": "4420", 00:23:26.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:26.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:26.433 "prchk_reftag": false, 00:23:26.433 "prchk_guard": false, 00:23:26.433 "hdgst": false, 00:23:26.433 "ddgst": false, 00:23:26.433 "dhchap_key": "key3", 00:23:26.433 "method": "bdev_nvme_attach_controller", 00:23:26.433 "req_id": 1 00:23:26.433 } 00:23:26.433 Got JSON-RPC error response 00:23:26.433 response: 00:23:26.433 { 00:23:26.433 "code": -5, 00:23:26.433 "message": "Input/output error" 00:23:26.433 } 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:26.433 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:26.693 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:26.965 request: 00:23:26.965 { 00:23:26.965 "name": "nvme0", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:26.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:26.966 "prchk_reftag": false, 00:23:26.966 "prchk_guard": false, 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false, 00:23:26.966 
"dhchap_key": "key0", 00:23:26.966 "dhchap_ctrlr_key": "key1", 00:23:26.966 "method": "bdev_nvme_attach_controller", 00:23:26.966 "req_id": 1 00:23:26.966 } 00:23:26.966 Got JSON-RPC error response 00:23:26.966 response: 00:23:26.966 { 00:23:26.966 "code": -5, 00:23:26.966 "message": "Input/output error" 00:23:26.966 } 00:23:26.966 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:26.966 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:26.966 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:26.966 23:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:26.966 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:26.966 23:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:27.224 00:23:27.224 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:27.224 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:27.224 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.224 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.224 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.224 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2444931 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2444931 ']' 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2444931 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2444931 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.482 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.483 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2444931' 00:23:27.483 killing process with pid 2444931 00:23:27.483 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2444931 00:23:27.483 23:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2444931 
00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.079 rmmod nvme_tcp 00:23:30.079 rmmod nvme_fabrics 00:23:30.079 rmmod nvme_keyring 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2465936 ']' 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2465936 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2465936 ']' 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2465936 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2465936 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2465936' 00:23:30.079 killing process with pid 2465936 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2465936 00:23:30.079 23:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2465936 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.454 23:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.367 23:27:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.367 23:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uXn /tmp/spdk.key-sha256.IHN /tmp/spdk.key-sha384.LnY /tmp/spdk.key-sha512.F1K /tmp/spdk.key-sha512.8pe /tmp/spdk.key-sha384.plZ /tmp/spdk.key-sha256.e0X '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:33.367 00:23:33.367 real 2m16.698s 00:23:33.367 user 5m10.662s 00:23:33.367 sys 0m20.717s 00:23:33.367 23:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.367 23:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.367 ************************************ 00:23:33.367 END TEST nvmf_auth_target 00:23:33.367 ************************************ 00:23:33.367 23:27:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:33.367 23:27:42 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:33.367 23:27:42 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.367 23:27:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:33.367 23:27:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.367 23:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.367 ************************************ 00:23:33.367 START TEST nvmf_bdevio_no_huge 00:23:33.367 ************************************ 00:23:33.367 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.626 * Looking for test storage... 00:23:33.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.626 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
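
common.sh builds the target's argv incrementally: the shared-memory id and full tracepoint mask just above, the no-hugepage flags appended immediately below, and finally the netns prefix once the test interfaces exist. A condensed sketch of how those pieces combine into the nvmf_tgt command this test launches later in the log (array contents reconstructed from the trace; treat the exact values as assumptions):

    # Sketch: how the argv grows across common.sh before launch (reconstructed).
    NVMF_APP_SHM_ID=0
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + all tracepoint groups
    NO_HUGE=(--no-huge -s 1024)                  # no hugepages, 1024 MB memory cap
    NVMF_APP+=("${NO_HUGE[@]}")
    # After nvmf_tcp_init the netns wrapper is prepended, yielding the
    # invocation seen further down in this log:
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x78 &   # -> nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
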
00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.627 23:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:38.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:38.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:38.901 Found net devices under 0000:86:00.0: cvl_0_0 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:38.901 Found net devices under 0000:86:00.1: cvl_0_1 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:23:38.901 00:23:38.901 --- 10.0.0.2 ping statistics --- 00:23:38.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.901 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:23:38.901 00:23:38.901 --- 10.0.0.1 ping statistics --- 00:23:38.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.901 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2470788 00:23:38.901 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2470788 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2470788 ']' 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.902 23:27:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.902 [2024-07-10 23:27:47.856122] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:23:38.902 [2024-07-10 23:27:47.856244] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:39.161 [2024-07-10 23:27:47.984065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.161 [2024-07-10 23:27:48.211842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:39.161 [2024-07-10 23:27:48.211885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.161 [2024-07-10 23:27:48.211897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.161 [2024-07-10 23:27:48.211905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.161 [2024-07-10 23:27:48.211914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.161 [2024-07-10 23:27:48.212070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.161 [2024-07-10 23:27:48.212221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.161 [2024-07-10 23:27:48.212158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:39.161 [2024-07-10 23:27:48.212245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 [2024-07-10 23:27:48.678063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 Malloc0 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.732 23:27:48 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 [2024-07-10 23:27:48.787466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.732 { 00:23:39.732 "params": { 00:23:39.732 "name": "Nvme$subsystem", 00:23:39.732 "trtype": "$TEST_TRANSPORT", 00:23:39.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.732 "adrfam": "ipv4", 00:23:39.732 "trsvcid": "$NVMF_PORT", 00:23:39.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.732 "hdgst": ${hdgst:-false}, 00:23:39.732 "ddgst": ${ddgst:-false} 00:23:39.732 }, 00:23:39.732 "method": "bdev_nvme_attach_controller" 00:23:39.732 } 00:23:39.732 EOF 00:23:39.732 )") 00:23:39.732 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:39.992 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:39.992 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:39.992 23:27:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:39.992 "params": { 00:23:39.992 "name": "Nvme1", 00:23:39.992 "trtype": "tcp", 00:23:39.992 "traddr": "10.0.0.2", 00:23:39.992 "adrfam": "ipv4", 00:23:39.992 "trsvcid": "4420", 00:23:39.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.992 "hdgst": false, 00:23:39.992 "ddgst": false 00:23:39.992 }, 00:23:39.992 "method": "bdev_nvme_attach_controller" 00:23:39.992 }' 00:23:39.992 [2024-07-10 23:27:48.860706] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:23:39.992 [2024-07-10 23:27:48.860793] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2470909 ] 00:23:39.992 [2024-07-10 23:27:48.980619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.251 [2024-07-10 23:27:49.226094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.251 [2024-07-10 23:27:49.226102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.251 [2024-07-10 23:27:49.226109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.820 I/O targets: 00:23:40.820 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:40.820 00:23:40.820 00:23:40.820 CUnit - A unit testing framework for C - Version 2.1-3 00:23:40.820 http://cunit.sourceforge.net/ 00:23:40.820 00:23:40.820 00:23:40.820 Suite: bdevio tests on: Nvme1n1 00:23:40.820 Test: blockdev write read block ...passed 00:23:40.820 Test: blockdev write zeroes read block ...passed 00:23:40.820 Test: blockdev write zeroes read no split ...passed 00:23:40.820 Test: blockdev write zeroes read split ...passed 00:23:41.079 Test: blockdev write zeroes read split partial ...passed 00:23:41.079 Test: blockdev reset ...[2024-07-10 23:27:49.911882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.079 [2024-07-10 23:27:49.911990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b200 (9): Bad file descriptor 00:23:41.079 [2024-07-10 23:27:50.008129] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:41.079 passed 00:23:41.079 Test: blockdev write read 8 blocks ...passed 00:23:41.079 Test: blockdev write read size > 128k ...passed 00:23:41.079 Test: blockdev write read invalid size ...passed 00:23:41.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:41.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:41.079 Test: blockdev write read max offset ...passed 00:23:41.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:41.339 Test: blockdev writev readv 8 blocks ...passed 00:23:41.339 Test: blockdev writev readv 30 x 1block ...passed 00:23:41.339 Test: blockdev writev readv block ...passed 00:23:41.339 Test: blockdev writev readv size > 128k ...passed 00:23:41.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:41.339 Test: blockdev comparev and writev ...[2024-07-10 23:27:50.227132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.227193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.227214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.227226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.227550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.227569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.227586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.227596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.227898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.227914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.227934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.227944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.228259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.228276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.228292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.339 [2024-07-10 23:27:50.228307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:41.339 passed 00:23:41.339 Test: blockdev nvme passthru rw ...passed 00:23:41.339 Test: blockdev nvme passthru vendor specific ...[2024-07-10 23:27:50.310630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.339 [2024-07-10 23:27:50.310663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.310823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.339 [2024-07-10 23:27:50.310838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.310980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.339 [2024-07-10 23:27:50.310993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:41.339 [2024-07-10 23:27:50.311148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.339 [2024-07-10 23:27:50.311166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:41.339 passed 00:23:41.339 Test: blockdev nvme admin passthru ...passed 00:23:41.339 Test: blockdev copy ...passed 00:23:41.339 00:23:41.339 Run Summary: Type Total Ran Passed Failed Inactive 00:23:41.339 suites 1 1 n/a 0 0 00:23:41.339 tests 23 23 23 0 0 00:23:41.339 asserts 152 152 152 0 n/a 00:23:41.339 00:23:41.339 Elapsed time = 1.422 seconds 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:42.278 rmmod nvme_tcp 00:23:42.278 rmmod nvme_fabrics 00:23:42.278 rmmod nvme_keyring 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.278 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2470788 ']' 00:23:42.279 23:27:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2470788 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2470788 ']' 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2470788 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2470788 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2470788' 00:23:42.279 killing process with pid 2470788 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2470788 00:23:42.279 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2470788 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.216 23:27:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.123 23:27:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.123 00:23:45.123 real 0m11.604s 00:23:45.123 user 0m20.080s 00:23:45.123 sys 0m4.996s 00:23:45.123 23:27:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:45.123 23:27:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:45.123 ************************************ 00:23:45.123 END TEST nvmf_bdevio_no_huge 00:23:45.123 ************************************ 00:23:45.123 23:27:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:45.123 23:27:54 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:45.123 23:27:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:45.123 23:27:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.123 23:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.123 ************************************ 00:23:45.123 START TEST nvmf_tls 00:23:45.123 ************************************ 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:45.123 * Looking for test storage... 
00:23:45.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.123 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:45.382 23:27:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:50.656 
23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:50.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:50.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:50.656 Found net devices under 0000:86:00.0: cvl_0_0 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:50.656 Found net devices under 0000:86:00.1: cvl_0_1 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.656 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:50.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:50.657 00:23:50.657 --- 10.0.0.2 ping statistics --- 00:23:50.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.657 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:50.657 00:23:50.657 --- 10.0.0.1 ping statistics --- 00:23:50.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.657 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2474888 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2474888 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2474888 ']' 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.657 23:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.916 [2024-07-10 23:27:59.724233] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:23:50.916 [2024-07-10 23:27:59.724315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.916 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.916 [2024-07-10 23:27:59.834285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.174 [2024-07-10 23:28:00.046905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.174 [2024-07-10 23:28:00.046954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:51.174 [2024-07-10 23:28:00.046967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.174 [2024-07-10 23:28:00.046978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.174 [2024-07-10 23:28:00.046987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.174 [2024-07-10 23:28:00.047015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.433 23:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.433 23:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:51.433 23:28:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.433 23:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.433 23:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.692 23:28:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.692 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:51.692 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:51.692 true 00:23:51.692 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:51.692 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:51.950 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:51.950 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:51.951 23:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:52.208 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.208 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:52.208 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:52.208 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:52.208 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:52.466 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.466 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:52.724 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:52.982 23:28:01 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.982 23:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:53.240 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:53.240 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:53.240 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:53.240 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:53.240 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.WDRewWVc2O 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.uhWZ4QRX9w 00:23:53.498 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:53.499 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:53.499 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.WDRewWVc2O 00:23:53.499 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uhWZ4QRX9w 00:23:53.499 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:53.757 23:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:54.361 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.WDRewWVc2O 00:23:54.361 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WDRewWVc2O 00:23:54.361 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.361 [2024-07-10 23:28:03.391376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.361 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.621 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.879 [2024-07-10 23:28:03.704190] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.879 [2024-07-10 23:28:03.704445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.880 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.880 malloc0 00:23:54.880 23:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.138 23:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WDRewWVc2O 00:23:55.396 [2024-07-10 23:28:04.231404] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:55.396 23:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WDRewWVc2O 00:23:55.396 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.381 Initializing NVMe Controllers 00:24:05.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:05.381 Initialization complete. Launching workers. 
00:24:05.381 ======================================================== 00:24:05.381 Latency(us) 00:24:05.381 Device Information : IOPS MiB/s Average min max 00:24:05.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12786.78 49.95 5005.91 1313.56 7596.85 00:24:05.381 ======================================================== 00:24:05.381 Total : 12786.78 49.95 5005.91 1313.56 7596.85 00:24:05.381 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WDRewWVc2O 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WDRewWVc2O' 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2477247 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2477247 /var/tmp/bdevperf.sock 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2477247 ']' 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.641 23:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 [2024-07-10 23:28:14.525594] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
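The PSK files used above and below (/tmp/tmp.WDRewWVc2O and its deliberately mismatched sibling /tmp/tmp.uhWZ4QRX9w) were generated earlier by format_interchange_psk at tls.sh@118-@119. A simplified sketch of what its 'python -' heredoc computes, assuming the NVMe/TCP PSK interchange layout (base64 of the configured PSK with its CRC-32 appended least-significant byte first; the middle field is the hash identifier, so digest 1 renders as 01); this is not the exact nvmf/common.sh code:

    format_interchange_psk() {
        local key=$1 digest=$2
        # append CRC-32 of the key bytes (LSB first), base64-encode, wrap in the NVMeTLSkey-1 frame
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1

The trace at tls.sh@118 shows this input/digest pair yielding NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, and both key files are chmod 0600 at tls.sh@127-@128 before use.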
00:24:05.641 [2024-07-10 23:28:14.525703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477247 ] 00:24:05.641 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.641 [2024-07-10 23:28:14.625061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.901 [2024-07-10 23:28:14.840495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.469 23:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.469 23:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:06.469 23:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WDRewWVc2O 00:24:06.469 [2024-07-10 23:28:15.463400] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.470 [2024-07-10 23:28:15.463500] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:06.729 TLSTESTn1 00:24:06.729 23:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:06.729 Running I/O for 10 seconds... 00:24:16.705 00:24:16.705 Latency(us) 00:24:16.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.705 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.705 Verification LBA range: start 0x0 length 0x2000 00:24:16.705 TLSTESTn1 : 10.04 3383.23 13.22 0.00 0.00 37758.12 7693.36 43766.65 00:24:16.705 =================================================================================================================== 00:24:16.705 Total : 3383.23 13.22 0.00 0.00 37758.12 7693.36 43766.65 00:24:16.705 0 00:24:16.705 23:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.705 23:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2477247 00:24:16.705 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2477247 ']' 00:24:16.705 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2477247 00:24:16.706 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:16.706 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.706 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2477247 00:24:16.965 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:16.965 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:16.965 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2477247' 00:24:16.965 killing process with pid 2477247 00:24:16.965 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2477247 00:24:16.965 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.965 00:24:16.965 Latency(us) 00:24:16.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:16.965 =================================================================================================================== 00:24:16.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.965 [2024-07-10 23:28:25.783520] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:16.965 23:28:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2477247 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uhWZ4QRX9w 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uhWZ4QRX9w 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uhWZ4QRX9w 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uhWZ4QRX9w' 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2479300 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2479300 /var/tmp/bdevperf.sock 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2479300 ']' 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.902 23:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.902 [2024-07-10 23:28:26.945002] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
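The bdevperf instance starting here (pid 2479300) was launched under the NOT wrapper at tls.sh@146 with /tmp/tmp.uhWZ4QRX9w, a key that was never registered for host1 via nvmf_subsystem_add_host, so this attach is expected to fail; the es=1 / (( !es == 0 )) bookkeeping traced further down is NOT inverting that failure into a pass. A simplified sketch of the pattern (the real common/autotest_common.sh helper also special-cases signal exits; this is not its exact code):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only when the wrapped command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uhWZ4QRX9w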
00:24:17.902 [2024-07-10 23:28:26.945111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479300 ] 00:24:18.161 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.161 [2024-07-10 23:28:27.044540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.420 [2024-07-10 23:28:27.264997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.679 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.679 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:18.679 23:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uhWZ4QRX9w 00:24:18.938 [2024-07-10 23:28:27.893366] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.938 [2024-07-10 23:28:27.893489] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:18.938 [2024-07-10 23:28:27.903068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:18.938 [2024-07-10 23:28:27.903750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (107): Transport endpoint is not connected 00:24:18.938 [2024-07-10 23:28:27.904727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:24:18.938 [2024-07-10 23:28:27.905726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.938 [2024-07-10 23:28:27.905743] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:18.938 [2024-07-10 23:28:27.905763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:18.938 request: 00:24:18.939 { 00:24:18.939 "name": "TLSTEST", 00:24:18.939 "trtype": "tcp", 00:24:18.939 "traddr": "10.0.0.2", 00:24:18.939 "adrfam": "ipv4", 00:24:18.939 "trsvcid": "4420", 00:24:18.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.939 "prchk_reftag": false, 00:24:18.939 "prchk_guard": false, 00:24:18.939 "hdgst": false, 00:24:18.939 "ddgst": false, 00:24:18.939 "psk": "/tmp/tmp.uhWZ4QRX9w", 00:24:18.939 "method": "bdev_nvme_attach_controller", 00:24:18.939 "req_id": 1 00:24:18.939 } 00:24:18.939 Got JSON-RPC error response 00:24:18.939 response: 00:24:18.939 { 00:24:18.939 "code": -5, 00:24:18.939 "message": "Input/output error" 00:24:18.939 } 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2479300 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2479300 ']' 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2479300 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2479300 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2479300' 00:24:18.939 killing process with pid 2479300 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2479300 00:24:18.939 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.939 00:24:18.939 Latency(us) 00:24:18.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.939 =================================================================================================================== 00:24:18.939 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.939 [2024-07-10 23:28:27.967022] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:18.939 23:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2479300 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WDRewWVc2O 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WDRewWVc2O 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WDRewWVc2O 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WDRewWVc2O' 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2479760 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2479760 /var/tmp/bdevperf.sock 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2479760 ']' 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.317 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.317 [2024-07-10 23:28:29.098893] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:24:20.317 [2024-07-10 23:28:29.098986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479760 ] 00:24:20.317 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.317 [2024-07-10 23:28:29.197600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.577 [2024-07-10 23:28:29.414697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.836 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.836 23:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:20.836 23:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WDRewWVc2O 00:24:21.095 [2024-07-10 23:28:30.027332] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.095 [2024-07-10 23:28:30.027448] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:21.095 [2024-07-10 23:28:30.038207] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:21.095 [2024-07-10 23:28:30.038243] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:21.095 [2024-07-10 23:28:30.038289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:21.095 [2024-07-10 23:28:30.039005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (107): Transport endpoint is not connected 00:24:21.095 [2024-07-10 23:28:30.039984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:24:21.095 [2024-07-10 23:28:30.040978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:21.095 [2024-07-10 23:28:30.041005] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:21.095 [2024-07-10 23:28:30.041020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
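Note: this case fails differently from the first. The target logs "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1": the TLS PSK identity is the string "NVMe0R01 <hostnqn> <subnqn>", and host2 was never registered on the subsystem, so the server-side lookup fails and the host again observes ENOTCONN; the dumped request follows. A sketch of the registration this negative test deliberately omits (hypothetical fix, not executed by the run):

    # identity string the target constructs during the TLS handshake:
    echo "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
    # registering host2 with the same key on the target would satisfy the lookup:
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WDRewWVc2O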
00:24:21.095 request: 00:24:21.095 { 00:24:21.095 "name": "TLSTEST", 00:24:21.095 "trtype": "tcp", 00:24:21.095 "traddr": "10.0.0.2", 00:24:21.095 "adrfam": "ipv4", 00:24:21.095 "trsvcid": "4420", 00:24:21.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.095 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:21.095 "prchk_reftag": false, 00:24:21.095 "prchk_guard": false, 00:24:21.095 "hdgst": false, 00:24:21.095 "ddgst": false, 00:24:21.095 "psk": "/tmp/tmp.WDRewWVc2O", 00:24:21.095 "method": "bdev_nvme_attach_controller", 00:24:21.095 "req_id": 1 00:24:21.095 } 00:24:21.095 Got JSON-RPC error response 00:24:21.095 response: 00:24:21.095 { 00:24:21.095 "code": -5, 00:24:21.095 "message": "Input/output error" 00:24:21.095 } 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2479760 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2479760 ']' 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2479760 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2479760 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2479760' 00:24:21.095 killing process with pid 2479760 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2479760 00:24:21.095 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.095 00:24:21.095 Latency(us) 00:24:21.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.095 =================================================================================================================== 00:24:21.095 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.095 [2024-07-10 23:28:30.098460] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:21.095 23:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2479760 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WDRewWVc2O 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WDRewWVc2O 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WDRewWVc2O 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WDRewWVc2O' 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2480006 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2480006 /var/tmp/bdevperf.sock 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2480006 ']' 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.475 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.475 [2024-07-10 23:28:31.221911] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:24:22.475 [2024-07-10 23:28:31.222002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480006 ] 00:24:22.475 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.475 [2024-07-10 23:28:31.319763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.735 [2024-07-10 23:28:31.543623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.995 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.995 23:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:22.995 23:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WDRewWVc2O 00:24:23.254 [2024-07-10 23:28:32.150221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.254 [2024-07-10 23:28:32.150344] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:23.254 [2024-07-10 23:28:32.163744] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:23.254 [2024-07-10 23:28:32.163775] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:23.254 [2024-07-10 23:28:32.163810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:23.254 [2024-07-10 23:28:32.164750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (107): Transport endpoint is not connected 00:24:23.254 [2024-07-10 23:28:32.165724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:24:23.254 [2024-07-10 23:28:32.166725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:23.254 [2024-07-10 23:28:32.166744] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:23.254 [2024-07-10 23:28:32.166760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
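Note: each of these expected-failure cases is wrapped in NOT, visible in the xtrace above (local es=0, (( es > 128 )), (( !es == 0 ))): the wrapper runs the command, captures its exit status, and succeeds only when the command failed, so run_bdevperf returning 1 makes the test case pass; the request dump for this attempt follows. A simplified sketch of the pattern, covering only the status inversion (the real helper in autotest_common.sh also validates the argument and treats signal exits, es > 128, specially):

    NOT() {
        local es=0
        "$@" || es=$?
        # invert: an expected-failure test passes only when the command failed
        (( es != 0 ))
    }
    NOT false && echo "expected failure observed"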
00:24:23.254 request: 00:24:23.254 { 00:24:23.254 "name": "TLSTEST", 00:24:23.254 "trtype": "tcp", 00:24:23.254 "traddr": "10.0.0.2", 00:24:23.254 "adrfam": "ipv4", 00:24:23.254 "trsvcid": "4420", 00:24:23.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:23.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.254 "prchk_reftag": false, 00:24:23.254 "prchk_guard": false, 00:24:23.254 "hdgst": false, 00:24:23.254 "ddgst": false, 00:24:23.254 "psk": "/tmp/tmp.WDRewWVc2O", 00:24:23.254 "method": "bdev_nvme_attach_controller", 00:24:23.254 "req_id": 1 00:24:23.254 } 00:24:23.254 Got JSON-RPC error response 00:24:23.254 response: 00:24:23.254 { 00:24:23.254 "code": -5, 00:24:23.254 "message": "Input/output error" 00:24:23.254 } 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2480006 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2480006 ']' 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2480006 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2480006 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2480006' 00:24:23.254 killing process with pid 2480006 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2480006 00:24:23.254 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.254 00:24:23.254 Latency(us) 00:24:23.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.254 =================================================================================================================== 00:24:23.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.254 [2024-07-10 23:28:32.229095] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:23.254 23:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2480006 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2480465 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2480465 /var/tmp/bdevperf.sock 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2480465 ']' 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.637 23:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.637 [2024-07-10 23:28:33.348527] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:24:24.637 [2024-07-10 23:28:33.348622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480465 ] 00:24:24.637 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.637 [2024-07-10 23:28:33.445581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.637 [2024-07-10 23:28:33.664171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.202 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.202 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:25.202 23:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:25.471 [2024-07-10 23:28:34.279278] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:25.471 [2024-07-10 23:28:34.281271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032cd80 (9): Bad file descriptor 00:24:25.471 [2024-07-10 23:28:34.282261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.471 [2024-07-10 23:28:34.282290] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:25.471 [2024-07-10 23:28:34.282304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
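Note: this case passes no --psk at all, so the initiator attempts a plain-text connection against a listener that was created with TLS enabled; the target closes the socket during connect and the same ENOTCONN/-5 path is taken. Unlike the earlier dumps, the request that follows has no "psk" member. The distinguishing invocation, as in the trace above minus the key (sketch):

    # no --psk: plain TCP against a TLS-enabled (-k) listener is refused
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1    # expect code -5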
00:24:25.471 request: 00:24:25.471 { 00:24:25.471 "name": "TLSTEST", 00:24:25.471 "trtype": "tcp", 00:24:25.471 "traddr": "10.0.0.2", 00:24:25.471 "adrfam": "ipv4", 00:24:25.471 "trsvcid": "4420", 00:24:25.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.471 "prchk_reftag": false, 00:24:25.471 "prchk_guard": false, 00:24:25.471 "hdgst": false, 00:24:25.471 "ddgst": false, 00:24:25.471 "method": "bdev_nvme_attach_controller", 00:24:25.471 "req_id": 1 00:24:25.471 } 00:24:25.471 Got JSON-RPC error response 00:24:25.471 response: 00:24:25.471 { 00:24:25.471 "code": -5, 00:24:25.471 "message": "Input/output error" 00:24:25.471 } 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2480465 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2480465 ']' 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2480465 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2480465 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2480465' 00:24:25.471 killing process with pid 2480465 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2480465 00:24:25.471 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.471 00:24:25.471 Latency(us) 00:24:25.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.471 =================================================================================================================== 00:24:25.471 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.471 23:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2480465 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2474888 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2474888 ']' 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2474888 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2474888 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.481 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.482 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2474888' 00:24:26.482 
killing process with pid 2474888 00:24:26.482 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2474888 00:24:26.482 [2024-07-10 23:28:35.448518] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:26.482 23:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2474888 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:27.860 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.PDlaRDnbEz 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.PDlaRDnbEz 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2480957 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2480957 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2480957 ']' 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.120 23:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.120 [2024-07-10 23:28:37.050768] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
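Note: format_interchange_psk above emits the NVMe TLS PSK interchange form: the prefix NVMeTLSkey-1, a two-hex-digit hash identifier (02 here; per the NVMe/TCP TLS rules, 01 denotes SHA-256 and 02 SHA-384), and a base64 payload consisting of the key bytes with a little-endian CRC32 appended, terminated by a colon. A sketch consistent with the trace (nvmf/common.sh builds the string by piping into python - in much the same way; the exact upstream code may differ in detail):

    KEY=00112233445566778899aabbccddeeff0011223344556677
    DIGEST=2
    python3 - "$KEY" "$DIGEST" <<'PY'
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # key characters used verbatim as bytes
    crc = zlib.crc32(key).to_bytes(4, "little")  # integrity tag appended before encoding
    b64 = base64.b64encode(key + crc).decode()
    print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{b64}:")
    PY
    # should reproduce the key_long value captured above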
00:24:28.120 [2024-07-10 23:28:37.050859] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.120 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.120 [2024-07-10 23:28:37.160900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.380 [2024-07-10 23:28:37.373463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.380 [2024-07-10 23:28:37.373509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.380 [2024-07-10 23:28:37.373521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.380 [2024-07-10 23:28:37.373532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.380 [2024-07-10 23:28:37.373542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.380 [2024-07-10 23:28:37.373577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.PDlaRDnbEz 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PDlaRDnbEz 00:24:28.950 23:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:28.950 [2024-07-10 23:28:38.010642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.210 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:29.210 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:29.468 [2024-07-10 23:28:38.351537] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:29.468 [2024-07-10 23:28:38.351743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.468 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:29.727 malloc0 00:24:29.727 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:29.727 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.PDlaRDnbEz 00:24:29.986 [2024-07-10 23:28:38.915671] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PDlaRDnbEz 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PDlaRDnbEz' 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2481430 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2481430 /var/tmp/bdevperf.sock 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2481430 ']' 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.986 23:28:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.986 [2024-07-10 23:28:38.989599] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
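Note: the setup_nvmf_tgt phase traced above boils down to six RPCs against the target: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back it with a malloc bdev, and register host1 with the 0600 key. A condensed replay of the exact commands from the trace (sketch; assumes the SPDK checkout as working directory and an already-running target):

    rpc=scripts/rpc.py
    key=/tmp/tmp.PDlaRDnbEz    # interchange-format PSK, mode 0600
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"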
00:24:29.986 [2024-07-10 23:28:38.989689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481430 ] 00:24:29.986 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.245 [2024-07-10 23:28:39.089390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.245 [2024-07-10 23:28:39.309265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.813 23:28:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.813 23:28:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:30.813 23:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PDlaRDnbEz 00:24:31.072 [2024-07-10 23:28:39.928302] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.072 [2024-07-10 23:28:39.928407] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:31.072 TLSTESTn1 00:24:31.072 23:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:31.072 Running I/O for 10 seconds... 00:24:43.281 00:24:43.281 Latency(us) 00:24:43.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.281 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:43.281 Verification LBA range: start 0x0 length 0x2000 00:24:43.281 TLSTESTn1 : 10.03 4529.90 17.69 0.00 0.00 28202.84 7151.97 50605.19 00:24:43.281 =================================================================================================================== 00:24:43.281 Total : 4529.90 17.69 0.00 0.00 28202.84 7151.97 50605.19 00:24:43.281 0 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2481430 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2481430 ']' 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2481430 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2481430 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2481430' 00:24:43.281 killing process with pid 2481430 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2481430 00:24:43.281 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.281 00:24:43.281 Latency(us) 00:24:43.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:43.281 =================================================================================================================== 00:24:43.281 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.281 [2024-07-10 23:28:50.227650] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:43.281 23:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2481430 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.PDlaRDnbEz 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PDlaRDnbEz 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PDlaRDnbEz 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PDlaRDnbEz 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PDlaRDnbEz' 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2483393 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2483393 /var/tmp/bdevperf.sock 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2483393 ']' 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.281 23:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.281 [2024-07-10 23:28:51.403431] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
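Note: the completed TLSTESTn1 run earlier in this phase is a positive case: roughly 4530 IOPS of 4 KiB verify I/O over the TLS-wrapped connection for 10 seconds. The MiB/s column follows directly from the IOPS and the -o 4096 block size; a quick check:

    echo '4529.90 * 4096 / 1048576' | bc -l    # ~17.69 MiB/s, matching the table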
00:24:43.281 [2024-07-10 23:28:51.403526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483393 ] 00:24:43.281 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.281 [2024-07-10 23:28:51.506102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.281 [2024-07-10 23:28:51.733147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.281 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.281 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:43.281 23:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PDlaRDnbEz 00:24:43.281 [2024-07-10 23:28:52.334223] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.281 [2024-07-10 23:28:52.334291] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:43.281 [2024-07-10 23:28:52.334302] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.PDlaRDnbEz 00:24:43.281 request: 00:24:43.281 { 00:24:43.281 "name": "TLSTEST", 00:24:43.281 "trtype": "tcp", 00:24:43.281 "traddr": "10.0.0.2", 00:24:43.281 "adrfam": "ipv4", 00:24:43.281 "trsvcid": "4420", 00:24:43.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.281 "prchk_reftag": false, 00:24:43.281 "prchk_guard": false, 00:24:43.281 "hdgst": false, 00:24:43.281 "ddgst": false, 00:24:43.281 "psk": "/tmp/tmp.PDlaRDnbEz", 00:24:43.281 "method": "bdev_nvme_attach_controller", 00:24:43.281 "req_id": 1 00:24:43.281 } 00:24:43.281 Got JSON-RPC error response 00:24:43.281 response: 00:24:43.281 { 00:24:43.281 "code": -1, 00:24:43.281 "message": "Operation not permitted" 00:24:43.281 } 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2483393 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2483393 ']' 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2483393 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2483393 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2483393' 00:24:43.539 killing process with pid 2483393 00:24:43.539 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2483393 00:24:43.539 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.539 00:24:43.539 Latency(us) 00:24:43.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.539 
=================================================================================================================== 00:24:43.539 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:43.540 23:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2483393 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2480957 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2480957 ']' 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2480957 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2480957 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2480957' 00:24:44.477 killing process with pid 2480957 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2480957 00:24:44.477 [2024-07-10 23:28:53.502348] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:44.477 23:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2480957 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2483969 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2483969 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2483969 ']' 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.853 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
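Note: the chmod 0666 case above fails before any connection is attempted: bdev_nvme_load_psk rejects the key file with "Incorrect permissions for PSK file", which is why every successful case keeps the file at 0600 (the target-side loader, tcp_load_psk, applies the same rule below when NOT setup_nvmf_tgt is exercised). A rough shell equivalent of the check (assumption: any group/other mode bits cause rejection; the authoritative check lives in the C loaders):

    key=/tmp/tmp.PDlaRDnbEz
    mode=$(stat -c '%a' "$key")
    if (( 0$mode & 077 )); then
        echo "Incorrect permissions for PSK file" >&2
    fi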
00:24:45.854 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.854 23:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.112 [2024-07-10 23:28:54.976479] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:46.112 [2024-07-10 23:28:54.976582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.112 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.112 [2024-07-10 23:28:55.079268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.371 [2024-07-10 23:28:55.280671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.371 [2024-07-10 23:28:55.280718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.371 [2024-07-10 23:28:55.280729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.371 [2024-07-10 23:28:55.280756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.371 [2024-07-10 23:28:55.280765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.371 [2024-07-10 23:28:55.280794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.PDlaRDnbEz 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PDlaRDnbEz 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.PDlaRDnbEz 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PDlaRDnbEz 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:46.939 [2024-07-10 23:28:55.941737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.939 23:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:47.198 
23:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:47.457 [2024-07-10 23:28:56.278609] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.457 [2024-07-10 23:28:56.278841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.457 23:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:47.457 malloc0 00:24:47.457 23:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:47.715 23:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PDlaRDnbEz 00:24:47.975 [2024-07-10 23:28:56.838985] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:47.975 [2024-07-10 23:28:56.839024] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:47.975 [2024-07-10 23:28:56.839064] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:47.975 request: 00:24:47.975 { 00:24:47.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.975 "host": "nqn.2016-06.io.spdk:host1", 00:24:47.975 "psk": "/tmp/tmp.PDlaRDnbEz", 00:24:47.975 "method": "nvmf_subsystem_add_host", 00:24:47.975 "req_id": 1 00:24:47.975 } 00:24:47.975 Got JSON-RPC error response 00:24:47.975 response: 00:24:47.975 { 00:24:47.975 "code": -32603, 00:24:47.975 "message": "Internal error" 00:24:47.975 } 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2483969 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2483969 ']' 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2483969 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2483969 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2483969' 00:24:47.975 killing process with pid 2483969 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2483969 00:24:47.975 23:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2483969 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.PDlaRDnbEz 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:49.354 
23:28:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2484561 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2484561 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2484561 ']' 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.354 23:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.354 [2024-07-10 23:28:58.355459] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:49.354 [2024-07-10 23:28:58.355547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.354 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.614 [2024-07-10 23:28:58.463525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.614 [2024-07-10 23:28:58.674481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.614 [2024-07-10 23:28:58.674526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.614 [2024-07-10 23:28:58.674538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.614 [2024-07-10 23:28:58.674564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.614 [2024-07-10 23:28:58.674574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:49.614 [2024-07-10 23:28:58.674604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.PDlaRDnbEz 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PDlaRDnbEz 00:24:50.182 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:50.442 [2024-07-10 23:28:59.311860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.442 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:50.442 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:50.701 [2024-07-10 23:28:59.640734] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.701 [2024-07-10 23:28:59.640981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.701 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:50.960 malloc0 00:24:50.960 23:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PDlaRDnbEz 00:24:51.220 [2024-07-10 23:29:00.221704] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2484936 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2484936 /var/tmp/bdevperf.sock 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2484936 ']' 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.220 23:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.479 [2024-07-10 23:29:00.309596] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:51.479 [2024-07-10 23:29:00.309688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484936 ] 00:24:51.479 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.479 [2024-07-10 23:29:00.409843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.738 [2024-07-10 23:29:00.634190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.324 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.324 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:52.324 23:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PDlaRDnbEz 00:24:52.324 [2024-07-10 23:29:01.224117] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.324 [2024-07-10 23:29:01.224224] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:52.324 TLSTESTn1 00:24:52.324 23:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:52.605 23:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:52.605 "subsystems": [ 00:24:52.605 { 00:24:52.605 "subsystem": "keyring", 00:24:52.605 "config": [] 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "subsystem": "iobuf", 00:24:52.605 "config": [ 00:24:52.605 { 00:24:52.605 "method": "iobuf_set_options", 00:24:52.605 "params": { 00:24:52.605 "small_pool_count": 8192, 00:24:52.605 "large_pool_count": 1024, 00:24:52.605 "small_bufsize": 8192, 00:24:52.605 "large_bufsize": 135168 00:24:52.605 } 00:24:52.605 } 00:24:52.605 ] 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "subsystem": "sock", 00:24:52.605 "config": [ 00:24:52.605 { 00:24:52.605 "method": "sock_set_default_impl", 00:24:52.605 "params": { 00:24:52.605 "impl_name": "posix" 00:24:52.605 } 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "method": "sock_impl_set_options", 00:24:52.605 "params": { 00:24:52.605 "impl_name": "ssl", 00:24:52.605 "recv_buf_size": 4096, 00:24:52.605 "send_buf_size": 4096, 00:24:52.605 "enable_recv_pipe": true, 00:24:52.605 "enable_quickack": false, 00:24:52.605 "enable_placement_id": 0, 00:24:52.605 "enable_zerocopy_send_server": true, 00:24:52.605 "enable_zerocopy_send_client": false, 00:24:52.605 "zerocopy_threshold": 0, 00:24:52.605 "tls_version": 0, 00:24:52.605 "enable_ktls": false 00:24:52.605 } 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "method": "sock_impl_set_options", 00:24:52.605 "params": { 00:24:52.605 "impl_name": "posix", 00:24:52.605 "recv_buf_size": 2097152, 00:24:52.605 
"send_buf_size": 2097152, 00:24:52.605 "enable_recv_pipe": true, 00:24:52.605 "enable_quickack": false, 00:24:52.605 "enable_placement_id": 0, 00:24:52.605 "enable_zerocopy_send_server": true, 00:24:52.605 "enable_zerocopy_send_client": false, 00:24:52.605 "zerocopy_threshold": 0, 00:24:52.605 "tls_version": 0, 00:24:52.605 "enable_ktls": false 00:24:52.605 } 00:24:52.605 } 00:24:52.605 ] 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "subsystem": "vmd", 00:24:52.605 "config": [] 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "subsystem": "accel", 00:24:52.605 "config": [ 00:24:52.605 { 00:24:52.605 "method": "accel_set_options", 00:24:52.605 "params": { 00:24:52.605 "small_cache_size": 128, 00:24:52.605 "large_cache_size": 16, 00:24:52.605 "task_count": 2048, 00:24:52.605 "sequence_count": 2048, 00:24:52.605 "buf_count": 2048 00:24:52.605 } 00:24:52.605 } 00:24:52.605 ] 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "subsystem": "bdev", 00:24:52.605 "config": [ 00:24:52.605 { 00:24:52.605 "method": "bdev_set_options", 00:24:52.605 "params": { 00:24:52.605 "bdev_io_pool_size": 65535, 00:24:52.605 "bdev_io_cache_size": 256, 00:24:52.605 "bdev_auto_examine": true, 00:24:52.605 "iobuf_small_cache_size": 128, 00:24:52.605 "iobuf_large_cache_size": 16 00:24:52.605 } 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "method": "bdev_raid_set_options", 00:24:52.605 "params": { 00:24:52.605 "process_window_size_kb": 1024 00:24:52.605 } 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "method": "bdev_iscsi_set_options", 00:24:52.605 "params": { 00:24:52.605 "timeout_sec": 30 00:24:52.605 } 00:24:52.605 }, 00:24:52.605 { 00:24:52.605 "method": "bdev_nvme_set_options", 00:24:52.605 "params": { 00:24:52.605 "action_on_timeout": "none", 00:24:52.605 "timeout_us": 0, 00:24:52.605 "timeout_admin_us": 0, 00:24:52.605 "keep_alive_timeout_ms": 10000, 00:24:52.605 "arbitration_burst": 0, 00:24:52.605 "low_priority_weight": 0, 00:24:52.605 "medium_priority_weight": 0, 00:24:52.605 "high_priority_weight": 0, 00:24:52.605 "nvme_adminq_poll_period_us": 10000, 00:24:52.605 "nvme_ioq_poll_period_us": 0, 00:24:52.605 "io_queue_requests": 0, 00:24:52.605 "delay_cmd_submit": true, 00:24:52.605 "transport_retry_count": 4, 00:24:52.605 "bdev_retry_count": 3, 00:24:52.605 "transport_ack_timeout": 0, 00:24:52.605 "ctrlr_loss_timeout_sec": 0, 00:24:52.605 "reconnect_delay_sec": 0, 00:24:52.605 "fast_io_fail_timeout_sec": 0, 00:24:52.605 "disable_auto_failback": false, 00:24:52.605 "generate_uuids": false, 00:24:52.605 "transport_tos": 0, 00:24:52.605 "nvme_error_stat": false, 00:24:52.605 "rdma_srq_size": 0, 00:24:52.605 "io_path_stat": false, 00:24:52.605 "allow_accel_sequence": false, 00:24:52.605 "rdma_max_cq_size": 0, 00:24:52.605 "rdma_cm_event_timeout_ms": 0, 00:24:52.605 "dhchap_digests": [ 00:24:52.605 "sha256", 00:24:52.605 "sha384", 00:24:52.605 "sha512" 00:24:52.605 ], 00:24:52.605 "dhchap_dhgroups": [ 00:24:52.605 "null", 00:24:52.605 "ffdhe2048", 00:24:52.605 "ffdhe3072", 00:24:52.605 "ffdhe4096", 00:24:52.605 "ffdhe6144", 00:24:52.605 "ffdhe8192" 00:24:52.606 ] 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "bdev_nvme_set_hotplug", 00:24:52.606 "params": { 00:24:52.606 "period_us": 100000, 00:24:52.606 "enable": false 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "bdev_malloc_create", 00:24:52.606 "params": { 00:24:52.606 "name": "malloc0", 00:24:52.606 "num_blocks": 8192, 00:24:52.606 "block_size": 4096, 00:24:52.606 "physical_block_size": 4096, 00:24:52.606 "uuid": 
"7cd619bf-4651-46d9-9eee-bd35cbe04c83", 00:24:52.606 "optimal_io_boundary": 0 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "bdev_wait_for_examine" 00:24:52.606 } 00:24:52.606 ] 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "subsystem": "nbd", 00:24:52.606 "config": [] 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "subsystem": "scheduler", 00:24:52.606 "config": [ 00:24:52.606 { 00:24:52.606 "method": "framework_set_scheduler", 00:24:52.606 "params": { 00:24:52.606 "name": "static" 00:24:52.606 } 00:24:52.606 } 00:24:52.606 ] 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "subsystem": "nvmf", 00:24:52.606 "config": [ 00:24:52.606 { 00:24:52.606 "method": "nvmf_set_config", 00:24:52.606 "params": { 00:24:52.606 "discovery_filter": "match_any", 00:24:52.606 "admin_cmd_passthru": { 00:24:52.606 "identify_ctrlr": false 00:24:52.606 } 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_set_max_subsystems", 00:24:52.606 "params": { 00:24:52.606 "max_subsystems": 1024 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_set_crdt", 00:24:52.606 "params": { 00:24:52.606 "crdt1": 0, 00:24:52.606 "crdt2": 0, 00:24:52.606 "crdt3": 0 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_create_transport", 00:24:52.606 "params": { 00:24:52.606 "trtype": "TCP", 00:24:52.606 "max_queue_depth": 128, 00:24:52.606 "max_io_qpairs_per_ctrlr": 127, 00:24:52.606 "in_capsule_data_size": 4096, 00:24:52.606 "max_io_size": 131072, 00:24:52.606 "io_unit_size": 131072, 00:24:52.606 "max_aq_depth": 128, 00:24:52.606 "num_shared_buffers": 511, 00:24:52.606 "buf_cache_size": 4294967295, 00:24:52.606 "dif_insert_or_strip": false, 00:24:52.606 "zcopy": false, 00:24:52.606 "c2h_success": false, 00:24:52.606 "sock_priority": 0, 00:24:52.606 "abort_timeout_sec": 1, 00:24:52.606 "ack_timeout": 0, 00:24:52.606 "data_wr_pool_size": 0 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_create_subsystem", 00:24:52.606 "params": { 00:24:52.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.606 "allow_any_host": false, 00:24:52.606 "serial_number": "SPDK00000000000001", 00:24:52.606 "model_number": "SPDK bdev Controller", 00:24:52.606 "max_namespaces": 10, 00:24:52.606 "min_cntlid": 1, 00:24:52.606 "max_cntlid": 65519, 00:24:52.606 "ana_reporting": false 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_subsystem_add_host", 00:24:52.606 "params": { 00:24:52.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.606 "host": "nqn.2016-06.io.spdk:host1", 00:24:52.606 "psk": "/tmp/tmp.PDlaRDnbEz" 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_subsystem_add_ns", 00:24:52.606 "params": { 00:24:52.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.606 "namespace": { 00:24:52.606 "nsid": 1, 00:24:52.606 "bdev_name": "malloc0", 00:24:52.606 "nguid": "7CD619BF465146D99EEEBD35CBE04C83", 00:24:52.606 "uuid": "7cd619bf-4651-46d9-9eee-bd35cbe04c83", 00:24:52.606 "no_auto_visible": false 00:24:52.606 } 00:24:52.606 } 00:24:52.606 }, 00:24:52.606 { 00:24:52.606 "method": "nvmf_subsystem_add_listener", 00:24:52.606 "params": { 00:24:52.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.606 "listen_address": { 00:24:52.606 "trtype": "TCP", 00:24:52.606 "adrfam": "IPv4", 00:24:52.606 "traddr": "10.0.0.2", 00:24:52.606 "trsvcid": "4420" 00:24:52.606 }, 00:24:52.606 "secure_channel": true 00:24:52.606 } 00:24:52.606 } 00:24:52.606 ] 00:24:52.606 } 00:24:52.606 ] 00:24:52.606 }' 00:24:52.606 23:29:01 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:52.865 23:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:52.865 "subsystems": [ 00:24:52.865 { 00:24:52.865 "subsystem": "keyring", 00:24:52.865 "config": [] 00:24:52.865 }, 00:24:52.865 { 00:24:52.865 "subsystem": "iobuf", 00:24:52.865 "config": [ 00:24:52.865 { 00:24:52.865 "method": "iobuf_set_options", 00:24:52.865 "params": { 00:24:52.865 "small_pool_count": 8192, 00:24:52.865 "large_pool_count": 1024, 00:24:52.865 "small_bufsize": 8192, 00:24:52.865 "large_bufsize": 135168 00:24:52.865 } 00:24:52.865 } 00:24:52.865 ] 00:24:52.865 }, 00:24:52.865 { 00:24:52.865 "subsystem": "sock", 00:24:52.865 "config": [ 00:24:52.865 { 00:24:52.865 "method": "sock_set_default_impl", 00:24:52.865 "params": { 00:24:52.865 "impl_name": "posix" 00:24:52.865 } 00:24:52.865 }, 00:24:52.865 { 00:24:52.865 "method": "sock_impl_set_options", 00:24:52.865 "params": { 00:24:52.865 "impl_name": "ssl", 00:24:52.865 "recv_buf_size": 4096, 00:24:52.865 "send_buf_size": 4096, 00:24:52.865 "enable_recv_pipe": true, 00:24:52.865 "enable_quickack": false, 00:24:52.865 "enable_placement_id": 0, 00:24:52.865 "enable_zerocopy_send_server": true, 00:24:52.865 "enable_zerocopy_send_client": false, 00:24:52.865 "zerocopy_threshold": 0, 00:24:52.865 "tls_version": 0, 00:24:52.865 "enable_ktls": false 00:24:52.865 } 00:24:52.865 }, 00:24:52.865 { 00:24:52.865 "method": "sock_impl_set_options", 00:24:52.865 "params": { 00:24:52.865 "impl_name": "posix", 00:24:52.865 "recv_buf_size": 2097152, 00:24:52.865 "send_buf_size": 2097152, 00:24:52.865 "enable_recv_pipe": true, 00:24:52.865 "enable_quickack": false, 00:24:52.865 "enable_placement_id": 0, 00:24:52.865 "enable_zerocopy_send_server": true, 00:24:52.865 "enable_zerocopy_send_client": false, 00:24:52.865 "zerocopy_threshold": 0, 00:24:52.865 "tls_version": 0, 00:24:52.865 "enable_ktls": false 00:24:52.865 } 00:24:52.865 } 00:24:52.865 ] 00:24:52.865 }, 00:24:52.865 { 00:24:52.865 "subsystem": "vmd", 00:24:52.865 "config": [] 00:24:52.865 }, 00:24:52.865 { 00:24:52.865 "subsystem": "accel", 00:24:52.865 "config": [ 00:24:52.865 { 00:24:52.865 "method": "accel_set_options", 00:24:52.865 "params": { 00:24:52.865 "small_cache_size": 128, 00:24:52.865 "large_cache_size": 16, 00:24:52.865 "task_count": 2048, 00:24:52.865 "sequence_count": 2048, 00:24:52.865 "buf_count": 2048 00:24:52.865 } 00:24:52.865 } 00:24:52.865 ] 00:24:52.865 }, 00:24:52.865 { 00:24:52.866 "subsystem": "bdev", 00:24:52.866 "config": [ 00:24:52.866 { 00:24:52.866 "method": "bdev_set_options", 00:24:52.866 "params": { 00:24:52.866 "bdev_io_pool_size": 65535, 00:24:52.866 "bdev_io_cache_size": 256, 00:24:52.866 "bdev_auto_examine": true, 00:24:52.866 "iobuf_small_cache_size": 128, 00:24:52.866 "iobuf_large_cache_size": 16 00:24:52.866 } 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "method": "bdev_raid_set_options", 00:24:52.866 "params": { 00:24:52.866 "process_window_size_kb": 1024 00:24:52.866 } 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "method": "bdev_iscsi_set_options", 00:24:52.866 "params": { 00:24:52.866 "timeout_sec": 30 00:24:52.866 } 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "method": "bdev_nvme_set_options", 00:24:52.866 "params": { 00:24:52.866 "action_on_timeout": "none", 00:24:52.866 "timeout_us": 0, 00:24:52.866 "timeout_admin_us": 0, 00:24:52.866 "keep_alive_timeout_ms": 10000, 00:24:52.866 "arbitration_burst": 0, 
00:24:52.866 "low_priority_weight": 0, 00:24:52.866 "medium_priority_weight": 0, 00:24:52.866 "high_priority_weight": 0, 00:24:52.866 "nvme_adminq_poll_period_us": 10000, 00:24:52.866 "nvme_ioq_poll_period_us": 0, 00:24:52.866 "io_queue_requests": 512, 00:24:52.866 "delay_cmd_submit": true, 00:24:52.866 "transport_retry_count": 4, 00:24:52.866 "bdev_retry_count": 3, 00:24:52.866 "transport_ack_timeout": 0, 00:24:52.866 "ctrlr_loss_timeout_sec": 0, 00:24:52.866 "reconnect_delay_sec": 0, 00:24:52.866 "fast_io_fail_timeout_sec": 0, 00:24:52.866 "disable_auto_failback": false, 00:24:52.866 "generate_uuids": false, 00:24:52.866 "transport_tos": 0, 00:24:52.866 "nvme_error_stat": false, 00:24:52.866 "rdma_srq_size": 0, 00:24:52.866 "io_path_stat": false, 00:24:52.866 "allow_accel_sequence": false, 00:24:52.866 "rdma_max_cq_size": 0, 00:24:52.866 "rdma_cm_event_timeout_ms": 0, 00:24:52.866 "dhchap_digests": [ 00:24:52.866 "sha256", 00:24:52.866 "sha384", 00:24:52.866 "sha512" 00:24:52.866 ], 00:24:52.866 "dhchap_dhgroups": [ 00:24:52.866 "null", 00:24:52.866 "ffdhe2048", 00:24:52.866 "ffdhe3072", 00:24:52.866 "ffdhe4096", 00:24:52.866 "ffdhe6144", 00:24:52.866 "ffdhe8192" 00:24:52.866 ] 00:24:52.866 } 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "method": "bdev_nvme_attach_controller", 00:24:52.866 "params": { 00:24:52.866 "name": "TLSTEST", 00:24:52.866 "trtype": "TCP", 00:24:52.866 "adrfam": "IPv4", 00:24:52.866 "traddr": "10.0.0.2", 00:24:52.866 "trsvcid": "4420", 00:24:52.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.866 "prchk_reftag": false, 00:24:52.866 "prchk_guard": false, 00:24:52.866 "ctrlr_loss_timeout_sec": 0, 00:24:52.866 "reconnect_delay_sec": 0, 00:24:52.866 "fast_io_fail_timeout_sec": 0, 00:24:52.866 "psk": "/tmp/tmp.PDlaRDnbEz", 00:24:52.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.866 "hdgst": false, 00:24:52.866 "ddgst": false 00:24:52.866 } 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "method": "bdev_nvme_set_hotplug", 00:24:52.866 "params": { 00:24:52.866 "period_us": 100000, 00:24:52.866 "enable": false 00:24:52.866 } 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "method": "bdev_wait_for_examine" 00:24:52.866 } 00:24:52.866 ] 00:24:52.866 }, 00:24:52.866 { 00:24:52.866 "subsystem": "nbd", 00:24:52.866 "config": [] 00:24:52.866 } 00:24:52.866 ] 00:24:52.866 }' 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2484936 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2484936 ']' 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2484936 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2484936 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2484936' 00:24:52.866 killing process with pid 2484936 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2484936 00:24:52.866 Received shutdown signal, test time was about 10.000000 seconds 00:24:52.866 00:24:52.866 Latency(us) 00:24:52.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:24:52.866 =================================================================================================================== 00:24:52.866 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:52.866 [2024-07-10 23:29:01.875745] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:52.866 23:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2484936 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2484561 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2484561 ']' 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2484561 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2484561 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2484561' 00:24:54.243 killing process with pid 2484561 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2484561 00:24:54.243 [2024-07-10 23:29:02.976117] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:54.243 23:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2484561 00:24:55.622 23:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:55.622 23:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.622 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.622 23:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:55.622 "subsystems": [ 00:24:55.622 { 00:24:55.622 "subsystem": "keyring", 00:24:55.622 "config": [] 00:24:55.622 }, 00:24:55.622 { 00:24:55.622 "subsystem": "iobuf", 00:24:55.622 "config": [ 00:24:55.622 { 00:24:55.622 "method": "iobuf_set_options", 00:24:55.622 "params": { 00:24:55.622 "small_pool_count": 8192, 00:24:55.622 "large_pool_count": 1024, 00:24:55.622 "small_bufsize": 8192, 00:24:55.622 "large_bufsize": 135168 00:24:55.622 } 00:24:55.622 } 00:24:55.622 ] 00:24:55.622 }, 00:24:55.622 { 00:24:55.622 "subsystem": "sock", 00:24:55.622 "config": [ 00:24:55.622 { 00:24:55.622 "method": "sock_set_default_impl", 00:24:55.622 "params": { 00:24:55.622 "impl_name": "posix" 00:24:55.622 } 00:24:55.622 }, 00:24:55.622 { 00:24:55.622 "method": "sock_impl_set_options", 00:24:55.622 "params": { 00:24:55.622 "impl_name": "ssl", 00:24:55.622 "recv_buf_size": 4096, 00:24:55.622 "send_buf_size": 4096, 00:24:55.622 "enable_recv_pipe": true, 00:24:55.622 "enable_quickack": false, 00:24:55.622 "enable_placement_id": 0, 00:24:55.622 "enable_zerocopy_send_server": true, 00:24:55.622 "enable_zerocopy_send_client": false, 00:24:55.622 "zerocopy_threshold": 0, 00:24:55.622 "tls_version": 0, 00:24:55.622 "enable_ktls": false 00:24:55.622 } 00:24:55.622 }, 00:24:55.622 { 00:24:55.622 "method": "sock_impl_set_options", 00:24:55.622 "params": { 00:24:55.622 "impl_name": "posix", 00:24:55.622 
"recv_buf_size": 2097152, 00:24:55.622 "send_buf_size": 2097152, 00:24:55.622 "enable_recv_pipe": true, 00:24:55.622 "enable_quickack": false, 00:24:55.622 "enable_placement_id": 0, 00:24:55.622 "enable_zerocopy_send_server": true, 00:24:55.622 "enable_zerocopy_send_client": false, 00:24:55.622 "zerocopy_threshold": 0, 00:24:55.622 "tls_version": 0, 00:24:55.622 "enable_ktls": false 00:24:55.622 } 00:24:55.622 } 00:24:55.622 ] 00:24:55.622 }, 00:24:55.622 { 00:24:55.622 "subsystem": "vmd", 00:24:55.623 "config": [] 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "subsystem": "accel", 00:24:55.623 "config": [ 00:24:55.623 { 00:24:55.623 "method": "accel_set_options", 00:24:55.623 "params": { 00:24:55.623 "small_cache_size": 128, 00:24:55.623 "large_cache_size": 16, 00:24:55.623 "task_count": 2048, 00:24:55.623 "sequence_count": 2048, 00:24:55.623 "buf_count": 2048 00:24:55.623 } 00:24:55.623 } 00:24:55.623 ] 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "subsystem": "bdev", 00:24:55.623 "config": [ 00:24:55.623 { 00:24:55.623 "method": "bdev_set_options", 00:24:55.623 "params": { 00:24:55.623 "bdev_io_pool_size": 65535, 00:24:55.623 "bdev_io_cache_size": 256, 00:24:55.623 "bdev_auto_examine": true, 00:24:55.623 "iobuf_small_cache_size": 128, 00:24:55.623 "iobuf_large_cache_size": 16 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "bdev_raid_set_options", 00:24:55.623 "params": { 00:24:55.623 "process_window_size_kb": 1024 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "bdev_iscsi_set_options", 00:24:55.623 "params": { 00:24:55.623 "timeout_sec": 30 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "bdev_nvme_set_options", 00:24:55.623 "params": { 00:24:55.623 "action_on_timeout": "none", 00:24:55.623 "timeout_us": 0, 00:24:55.623 "timeout_admin_us": 0, 00:24:55.623 "keep_alive_timeout_ms": 10000, 00:24:55.623 "arbitration_burst": 0, 00:24:55.623 "low_priority_weight": 0, 00:24:55.623 "medium_priority_weight": 0, 00:24:55.623 "high_priority_weight": 0, 00:24:55.623 "nvme_adminq_poll_period_us": 10000, 00:24:55.623 "nvme_ioq_poll_period_us": 0, 00:24:55.623 "io_queue_requests": 0, 00:24:55.623 "delay_cmd_submit": true, 00:24:55.623 "transport_retry_count": 4, 00:24:55.623 "bdev_retry_count": 3, 00:24:55.623 "transport_ack_timeout": 0, 00:24:55.623 "ctrlr_loss_timeout_sec": 0, 00:24:55.623 "reconnect_delay_sec": 0, 00:24:55.623 "fast_io_fail_timeout_sec": 0, 00:24:55.623 "disable_auto_failback": false, 00:24:55.623 "generate_uuids": false, 00:24:55.623 "transport_tos": 0, 00:24:55.623 "nvme_error_stat": false, 00:24:55.623 "rdma_srq_size": 0, 00:24:55.623 "io_path_stat": false, 00:24:55.623 "allow_accel_sequence": false, 00:24:55.623 "rdma_max_cq_size": 0, 00:24:55.623 "rdma_cm_event_timeout_ms": 0, 00:24:55.623 "dhchap_digests": [ 00:24:55.623 "sha256", 00:24:55.623 "sha384", 00:24:55.623 "sha512" 00:24:55.623 ], 00:24:55.623 "dhchap_dhgroups": [ 00:24:55.623 "null", 00:24:55.623 "ffdhe2048", 00:24:55.623 "ffdhe3072", 00:24:55.623 "ffdhe4096", 00:24:55.623 "ffdhe6144", 00:24:55.623 "ffdhe8192" 00:24:55.623 ] 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "bdev_nvme_set_hotplug", 00:24:55.623 "params": { 00:24:55.623 "period_us": 100000, 00:24:55.623 "enable": false 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "bdev_malloc_create", 00:24:55.623 "params": { 00:24:55.623 "name": "malloc0", 00:24:55.623 "num_blocks": 8192, 00:24:55.623 "block_size": 4096, 00:24:55.623 "physical_block_size": 4096, 
00:24:55.623 "uuid": "7cd619bf-4651-46d9-9eee-bd35cbe04c83", 00:24:55.623 "optimal_io_boundary": 0 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "bdev_wait_for_examine" 00:24:55.623 } 00:24:55.623 ] 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "subsystem": "nbd", 00:24:55.623 "config": [] 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "subsystem": "scheduler", 00:24:55.623 "config": [ 00:24:55.623 { 00:24:55.623 "method": "framework_set_scheduler", 00:24:55.623 "params": { 00:24:55.623 "name": "static" 00:24:55.623 } 00:24:55.623 } 00:24:55.623 ] 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "subsystem": "nvmf", 00:24:55.623 "config": [ 00:24:55.623 { 00:24:55.623 "method": "nvmf_set_config", 00:24:55.623 "params": { 00:24:55.623 "discovery_filter": "match_any", 00:24:55.623 "admin_cmd_passthru": { 00:24:55.623 "identify_ctrlr": false 00:24:55.623 } 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_set_max_subsystems", 00:24:55.623 "params": { 00:24:55.623 "max_subsystems": 1024 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_set_crdt", 00:24:55.623 "params": { 00:24:55.623 "crdt1": 0, 00:24:55.623 "crdt2": 0, 00:24:55.623 "crdt3": 0 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_create_transport", 00:24:55.623 "params": { 00:24:55.623 "trtype": "TCP", 00:24:55.623 "max_queue_depth": 128, 00:24:55.623 "max_io_qpairs_per_ctrlr": 127, 00:24:55.623 "in_capsule_data_size": 4096, 00:24:55.623 "max_io_size": 131072, 00:24:55.623 "io_unit_size": 131072, 00:24:55.623 "max_aq_depth": 128, 00:24:55.623 "num_shared_buffers": 511, 00:24:55.623 "buf_cache_size": 4294967295, 00:24:55.623 "dif_insert_or_strip": false, 00:24:55.623 "zcopy": false, 00:24:55.623 "c2h_success": false, 00:24:55.623 "sock_priority": 0, 00:24:55.623 "abort_timeout_sec": 1, 00:24:55.623 "ack_timeout": 0, 00:24:55.623 "data_wr_pool_size": 0 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_create_subsystem", 00:24:55.623 "params": { 00:24:55.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.623 "allow_any_host": false, 00:24:55.623 "serial_number": "SPDK00000000000001", 00:24:55.623 "model_number": "SPDK bdev Controller", 00:24:55.623 "max_namespaces": 10, 00:24:55.623 "min_cntlid": 1, 00:24:55.623 "max_cntlid": 65519, 00:24:55.623 "ana_reporting": false 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_subsystem_add_host", 00:24:55.623 "params": { 00:24:55.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.623 "host": "nqn.2016-06.io.spdk:host1", 00:24:55.623 "psk": "/tmp/tmp.PDlaRDnbEz" 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_subsystem_add_ns", 00:24:55.623 "params": { 00:24:55.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.623 "namespace": { 00:24:55.623 "nsid": 1, 00:24:55.623 "bdev_name": "malloc0", 00:24:55.623 "nguid": "7CD619BF465146D99EEEBD35CBE04C83", 00:24:55.623 "uuid": "7cd619bf-4651-46d9-9eee-bd35cbe04c83", 00:24:55.623 "no_auto_visible": false 00:24:55.623 } 00:24:55.623 } 00:24:55.623 }, 00:24:55.623 { 00:24:55.623 "method": "nvmf_subsystem_add_listener", 00:24:55.623 "params": { 00:24:55.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.623 "listen_address": { 00:24:55.623 "trtype": "TCP", 00:24:55.623 "adrfam": "IPv4", 00:24:55.623 "traddr": "10.0.0.2", 00:24:55.623 "trsvcid": "4420" 00:24:55.623 }, 00:24:55.623 "secure_channel": true 00:24:55.623 } 00:24:55.623 } 00:24:55.623 ] 00:24:55.623 } 00:24:55.623 ] 00:24:55.623 }' 
00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2485639 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2485639 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2485639 ']' 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.623 23:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.623 [2024-07-10 23:29:04.425994] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:55.623 [2024-07-10 23:29:04.426088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.623 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.624 [2024-07-10 23:29:04.533458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.883 [2024-07-10 23:29:04.738926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.883 [2024-07-10 23:29:04.738975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.883 [2024-07-10 23:29:04.738987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.883 [2024-07-10 23:29:04.738998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.883 [2024-07-10 23:29:04.739008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
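The target that just started (pid 2485639) was fed its entire configuration as one JSON document: the test captures the live state of the previous target with save_config, then relaunches nvmf_tgt with -c /dev/fd/62, where the descriptor is supplied by the shell. A sketch of that round trip, assuming bash process substitution provides the fd the way the harness does (rpc as in the sketch further up):

  # Capture the running target's config, then replay it on a fresh nvmf_tgt.
  $rpc save_config > /tmp/tgtconf.json
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 -c <(cat /tmp/tgtconf.json)
  # bash exposes <(...) as /dev/fd/NN, hence the -c /dev/fd/62 in the trace

This is why the echoed JSON above matches the earlier tgtconf dump field for field, down to the PSK path in nvmf_subsystem_add_host.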
00:24:55.883 [2024-07-10 23:29:04.739105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.452 [2024-07-10 23:29:05.275513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.452 [2024-07-10 23:29:05.291500] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:56.452 [2024-07-10 23:29:05.307558] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.452 [2024-07-10 23:29:05.307782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2485784 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2485784 /var/tmp/bdevperf.sock 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2485784 ']' 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.452 23:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:56.452 "subsystems": [ 00:24:56.452 { 00:24:56.452 "subsystem": "keyring", 00:24:56.452 "config": [] 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "subsystem": "iobuf", 00:24:56.452 "config": [ 00:24:56.452 { 00:24:56.452 "method": "iobuf_set_options", 00:24:56.452 "params": { 00:24:56.452 "small_pool_count": 8192, 00:24:56.452 "large_pool_count": 1024, 00:24:56.452 "small_bufsize": 8192, 00:24:56.452 "large_bufsize": 135168 00:24:56.452 } 00:24:56.452 } 00:24:56.452 ] 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "subsystem": "sock", 00:24:56.452 "config": [ 00:24:56.452 { 00:24:56.452 "method": "sock_set_default_impl", 00:24:56.452 "params": { 00:24:56.452 "impl_name": "posix" 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "sock_impl_set_options", 00:24:56.452 "params": { 00:24:56.452 "impl_name": "ssl", 00:24:56.452 "recv_buf_size": 4096, 00:24:56.452 "send_buf_size": 4096, 00:24:56.452 "enable_recv_pipe": true, 00:24:56.452 "enable_quickack": false, 00:24:56.452 "enable_placement_id": 0, 00:24:56.452 "enable_zerocopy_send_server": true, 00:24:56.452 "enable_zerocopy_send_client": false, 00:24:56.452 "zerocopy_threshold": 0, 00:24:56.452 "tls_version": 0, 00:24:56.452 "enable_ktls": false 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "sock_impl_set_options", 00:24:56.452 "params": { 00:24:56.452 "impl_name": "posix", 00:24:56.452 "recv_buf_size": 2097152, 00:24:56.452 "send_buf_size": 2097152, 00:24:56.452 "enable_recv_pipe": true, 00:24:56.452 
"enable_quickack": false, 00:24:56.452 "enable_placement_id": 0, 00:24:56.452 "enable_zerocopy_send_server": true, 00:24:56.452 "enable_zerocopy_send_client": false, 00:24:56.452 "zerocopy_threshold": 0, 00:24:56.452 "tls_version": 0, 00:24:56.452 "enable_ktls": false 00:24:56.452 } 00:24:56.452 } 00:24:56.452 ] 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "subsystem": "vmd", 00:24:56.452 "config": [] 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "subsystem": "accel", 00:24:56.452 "config": [ 00:24:56.452 { 00:24:56.452 "method": "accel_set_options", 00:24:56.452 "params": { 00:24:56.452 "small_cache_size": 128, 00:24:56.452 "large_cache_size": 16, 00:24:56.452 "task_count": 2048, 00:24:56.452 "sequence_count": 2048, 00:24:56.452 "buf_count": 2048 00:24:56.452 } 00:24:56.452 } 00:24:56.452 ] 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "subsystem": "bdev", 00:24:56.452 "config": [ 00:24:56.452 { 00:24:56.452 "method": "bdev_set_options", 00:24:56.452 "params": { 00:24:56.452 "bdev_io_pool_size": 65535, 00:24:56.452 "bdev_io_cache_size": 256, 00:24:56.452 "bdev_auto_examine": true, 00:24:56.452 "iobuf_small_cache_size": 128, 00:24:56.452 "iobuf_large_cache_size": 16 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "bdev_raid_set_options", 00:24:56.452 "params": { 00:24:56.452 "process_window_size_kb": 1024 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "bdev_iscsi_set_options", 00:24:56.452 "params": { 00:24:56.452 "timeout_sec": 30 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "bdev_nvme_set_options", 00:24:56.452 "params": { 00:24:56.452 "action_on_timeout": "none", 00:24:56.452 "timeout_us": 0, 00:24:56.452 "timeout_admin_us": 0, 00:24:56.452 "keep_alive_timeout_ms": 10000, 00:24:56.452 "arbitration_burst": 0, 00:24:56.452 "low_priority_weight": 0, 00:24:56.452 "medium_priority_weight": 0, 00:24:56.452 "high_priority_weight": 0, 00:24:56.452 "nvme_adminq_poll_period_us": 10000, 00:24:56.452 "nvme_ioq_poll_period_us": 0, 00:24:56.452 "io_queue_requests": 512, 00:24:56.452 "delay_cmd_submit": true, 00:24:56.452 "transport_retry_count": 4, 00:24:56.452 "bdev_retry_count": 3, 00:24:56.452 "transport_ack_timeout": 0, 00:24:56.452 "ctrlr_loss_timeout_sec": 0, 00:24:56.452 "reconnect_delay_sec": 0, 00:24:56.452 "fast_io_fail_timeout_sec": 0, 00:24:56.452 "disable_auto_failback": false, 00:24:56.452 "generate_uuids": false, 00:24:56.452 "transport_tos": 0, 00:24:56.452 "nvme_error_stat": false, 00:24:56.452 "rdma_srq_size": 0, 00:24:56.452 "io_path_stat": false, 00:24:56.452 "allow_accel_sequence": false, 00:24:56.452 "rdma_max_cq_size": 0, 00:24:56.452 "rdma_cm_event_timeout_ms": 0, 00:24:56.452 "dhchap_digests": [ 00:24:56.452 "sha256", 00:24:56.452 "sha384", 00:24:56.452 "sha512" 00:24:56.452 ], 00:24:56.452 "dhchap_dhgroups": [ 00:24:56.452 "null", 00:24:56.452 "ffdhe2048", 00:24:56.452 "ffdhe3072", 00:24:56.452 "ffdhe4096", 00:24:56.452 "ffd 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:56.452 he6144", 00:24:56.452 "ffdhe8192" 00:24:56.452 ] 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "bdev_nvme_attach_controller", 00:24:56.452 "params": { 00:24:56.452 "name": "TLSTEST", 00:24:56.452 "trtype": "TCP", 00:24:56.452 "adrfam": "IPv4", 00:24:56.452 "traddr": "10.0.0.2", 00:24:56.452 "trsvcid": "4420", 00:24:56.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.452 "prchk_reftag": false, 00:24:56.452 "prchk_guard": false, 00:24:56.452 "ctrlr_loss_timeout_sec": 0, 00:24:56.452 "reconnect_delay_sec": 0, 00:24:56.452 "fast_io_fail_timeout_sec": 0, 00:24:56.452 "psk": "/tmp/tmp.PDlaRDnbEz", 00:24:56.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.452 "hdgst": false, 00:24:56.452 "ddgst": false 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "bdev_nvme_set_hotplug", 00:24:56.452 "params": { 00:24:56.452 "period_us": 100000, 00:24:56.452 "enable": false 00:24:56.452 } 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "method": "bdev_wait_for_examine" 00:24:56.452 } 00:24:56.452 ] 00:24:56.452 }, 00:24:56.452 { 00:24:56.452 "subsystem": "nbd", 00:24:56.452 "config": [] 00:24:56.452 } 00:24:56.452 ] 00:24:56.452 }' 00:24:56.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.453 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.453 23:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.453 [2024-07-10 23:29:05.445042] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:56.453 [2024-07-10 23:29:05.445158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485784 ] 00:24:56.453 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.712 [2024-07-10 23:29:05.545225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.712 [2024-07-10 23:29:05.762898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.279 [2024-07-10 23:29:06.206630] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.279 [2024-07-10 23:29:06.206737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:57.537 23:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.537 23:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:57.537 23:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:57.537 Running I/O for 10 seconds... 
00:25:07.521 00:25:07.521 Latency(us) 00:25:07.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.521 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:07.521 Verification LBA range: start 0x0 length 0x2000 00:25:07.521 TLSTESTn1 : 10.04 3575.90 13.97 0.00 0.00 35722.53 7636.37 54252.41 00:25:07.521 =================================================================================================================== 00:25:07.521 Total : 3575.90 13.97 0.00 0.00 35722.53 7636.37 54252.41 00:25:07.521 0 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2485784 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2485784 ']' 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2485784 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2485784 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2485784' 00:25:07.521 killing process with pid 2485784 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2485784 00:25:07.521 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.521 00:25:07.521 Latency(us) 00:25:07.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.521 =================================================================================================================== 00:25:07.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.521 [2024-07-10 23:29:16.566685] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:07.521 23:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2485784 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2485639 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2485639 ']' 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2485639 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2485639 00:25:08.900 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:08.901 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:08.901 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2485639' 00:25:08.901 killing process with pid 2485639 00:25:08.901 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2485639 00:25:08.901 [2024-07-10 23:29:17.673492] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:25:08.901 23:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2485639 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2487967 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2487967 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2487967 ']' 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.279 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.279 [2024-07-10 23:29:19.160303] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:10.279 [2024-07-10 23:29:19.160391] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.279 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.279 [2024-07-10 23:29:19.267039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.538 [2024-07-10 23:29:19.489957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.538 [2024-07-10 23:29:19.489997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.538 [2024-07-10 23:29:19.490011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.538 [2024-07-10 23:29:19.490023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.538 [2024-07-10 23:29:19.490034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:10.538 [2024-07-10 23:29:19.490066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.PDlaRDnbEz 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PDlaRDnbEz 00:25:11.104 23:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:11.104 [2024-07-10 23:29:20.124247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.104 23:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:11.363 23:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:11.623 [2024-07-10 23:29:20.469177] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:11.623 [2024-07-10 23:29:20.469426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.623 23:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:11.882 malloc0 00:25:11.882 23:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:11.882 23:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PDlaRDnbEz 00:25:12.142 [2024-07-10 23:29:21.021743] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2488411 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2488411 /var/tmp/bdevperf.sock 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2488411 ']' 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.142 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.142 [2024-07-10 23:29:21.107845] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:12.142 [2024-07-10 23:29:21.107951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488411 ] 00:25:12.142 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.142 [2024-07-10 23:29:21.208352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.400 [2024-07-10 23:29:21.429294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.967 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.967 23:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:12.967 23:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PDlaRDnbEz 00:25:13.225 23:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:13.225 [2024-07-10 23:29:22.185703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.225 nvme0n1 00:25:13.225 23:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:13.484 Running I/O for 1 seconds... 
00:25:14.423 00:25:14.423 Latency(us) 00:25:14.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.423 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:14.423 Verification LBA range: start 0x0 length 0x2000 00:25:14.423 nvme0n1 : 1.02 4538.46 17.73 0.00 0.00 27958.13 6724.56 35788.35 00:25:14.423 =================================================================================================================== 00:25:14.423 Total : 4538.46 17.73 0.00 0.00 27958.13 6724.56 35788.35 00:25:14.423 0 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2488411 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2488411 ']' 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2488411 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2488411 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2488411' 00:25:14.423 killing process with pid 2488411 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2488411 00:25:14.423 Received shutdown signal, test time was about 1.000000 seconds 00:25:14.423 00:25:14.423 Latency(us) 00:25:14.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.423 =================================================================================================================== 00:25:14.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.423 23:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2488411 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2487967 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2487967 ']' 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2487967 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2487967 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2487967' 00:25:15.809 killing process with pid 2487967 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2487967 00:25:15.809 [2024-07-10 23:29:24.544366] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:15.809 23:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2487967 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.186 
23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2489157 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2489157 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2489157 ']' 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.186 23:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.186 [2024-07-10 23:29:25.988318] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:17.186 [2024-07-10 23:29:25.988398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.186 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.186 [2024-07-10 23:29:26.095950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.445 [2024-07-10 23:29:26.300078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.445 [2024-07-10 23:29:26.300124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.445 [2024-07-10 23:29:26.300135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.445 [2024-07-10 23:29:26.300146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.445 [2024-07-10 23:29:26.300155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
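nvmfappstart above boils down to exec'ing nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocking in waitforlisten until the RPC socket answers. A rough stand-in for that wait (the trace shows max_retries=100; the polling interval and the rpc_get_methods probe are assumptions, not the helper's exact implementation):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll the UNIX domain socket until the app services RPCs
for _ in $(seq 1 100); do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done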
00:25:17.445 [2024-07-10 23:29:26.300188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.705 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.705 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:17.705 23:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.705 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.705 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.003 23:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.004 [2024-07-10 23:29:26.796181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.004 malloc0 00:25:18.004 [2024-07-10 23:29:26.870588] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:18.004 [2024-07-10 23:29:26.870834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2489400 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2489400 /var/tmp/bdevperf.sock 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2489400 ']' 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.004 23:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.004 [2024-07-10 23:29:26.969427] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:25:18.004 [2024-07-10 23:29:26.969509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489400 ] 00:25:18.004 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.264 [2024-07-10 23:29:27.072428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.264 [2024-07-10 23:29:27.299005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.833 23:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.833 23:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:18.833 23:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PDlaRDnbEz 00:25:19.091 23:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:19.091 [2024-07-10 23:29:28.079691] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.092 nvme0n1 00:25:19.351 23:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:19.351 Running I/O for 1 seconds... 00:25:20.288 00:25:20.288 Latency(us) 00:25:20.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.288 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:20.288 Verification LBA range: start 0x0 length 0x2000 00:25:20.288 nvme0n1 : 1.03 4407.27 17.22 0.00 0.00 28713.16 7351.43 53340.61 00:25:20.288 =================================================================================================================== 00:25:20.288 Total : 4407.27 17.22 0.00 0.00 28713.16 7351.43 53340.61 00:25:20.288 0 00:25:20.288 23:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:20.288 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.288 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.548 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.548 23:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:20.548 "subsystems": [ 00:25:20.548 { 00:25:20.548 "subsystem": "keyring", 00:25:20.548 "config": [ 00:25:20.548 { 00:25:20.548 "method": "keyring_file_add_key", 00:25:20.548 "params": { 00:25:20.548 "name": "key0", 00:25:20.548 "path": "/tmp/tmp.PDlaRDnbEz" 00:25:20.548 } 00:25:20.548 } 00:25:20.548 ] 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "subsystem": "iobuf", 00:25:20.548 "config": [ 00:25:20.548 { 00:25:20.548 "method": "iobuf_set_options", 00:25:20.548 "params": { 00:25:20.548 "small_pool_count": 8192, 00:25:20.548 "large_pool_count": 1024, 00:25:20.548 "small_bufsize": 8192, 00:25:20.548 "large_bufsize": 135168 00:25:20.548 } 00:25:20.548 } 00:25:20.548 ] 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "subsystem": "sock", 00:25:20.548 "config": [ 00:25:20.548 { 00:25:20.548 "method": "sock_set_default_impl", 00:25:20.548 "params": { 00:25:20.548 "impl_name": "posix" 00:25:20.548 } 
00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "method": "sock_impl_set_options", 00:25:20.548 "params": { 00:25:20.548 "impl_name": "ssl", 00:25:20.548 "recv_buf_size": 4096, 00:25:20.548 "send_buf_size": 4096, 00:25:20.548 "enable_recv_pipe": true, 00:25:20.548 "enable_quickack": false, 00:25:20.548 "enable_placement_id": 0, 00:25:20.548 "enable_zerocopy_send_server": true, 00:25:20.548 "enable_zerocopy_send_client": false, 00:25:20.548 "zerocopy_threshold": 0, 00:25:20.548 "tls_version": 0, 00:25:20.548 "enable_ktls": false 00:25:20.548 } 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "method": "sock_impl_set_options", 00:25:20.548 "params": { 00:25:20.548 "impl_name": "posix", 00:25:20.548 "recv_buf_size": 2097152, 00:25:20.548 "send_buf_size": 2097152, 00:25:20.548 "enable_recv_pipe": true, 00:25:20.548 "enable_quickack": false, 00:25:20.548 "enable_placement_id": 0, 00:25:20.548 "enable_zerocopy_send_server": true, 00:25:20.548 "enable_zerocopy_send_client": false, 00:25:20.548 "zerocopy_threshold": 0, 00:25:20.548 "tls_version": 0, 00:25:20.548 "enable_ktls": false 00:25:20.548 } 00:25:20.548 } 00:25:20.548 ] 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "subsystem": "vmd", 00:25:20.548 "config": [] 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "subsystem": "accel", 00:25:20.548 "config": [ 00:25:20.548 { 00:25:20.548 "method": "accel_set_options", 00:25:20.548 "params": { 00:25:20.548 "small_cache_size": 128, 00:25:20.548 "large_cache_size": 16, 00:25:20.548 "task_count": 2048, 00:25:20.548 "sequence_count": 2048, 00:25:20.548 "buf_count": 2048 00:25:20.548 } 00:25:20.548 } 00:25:20.548 ] 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "subsystem": "bdev", 00:25:20.548 "config": [ 00:25:20.548 { 00:25:20.548 "method": "bdev_set_options", 00:25:20.548 "params": { 00:25:20.548 "bdev_io_pool_size": 65535, 00:25:20.548 "bdev_io_cache_size": 256, 00:25:20.548 "bdev_auto_examine": true, 00:25:20.548 "iobuf_small_cache_size": 128, 00:25:20.548 "iobuf_large_cache_size": 16 00:25:20.548 } 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "method": "bdev_raid_set_options", 00:25:20.548 "params": { 00:25:20.548 "process_window_size_kb": 1024 00:25:20.548 } 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "method": "bdev_iscsi_set_options", 00:25:20.548 "params": { 00:25:20.548 "timeout_sec": 30 00:25:20.548 } 00:25:20.548 }, 00:25:20.548 { 00:25:20.548 "method": "bdev_nvme_set_options", 00:25:20.548 "params": { 00:25:20.548 "action_on_timeout": "none", 00:25:20.548 "timeout_us": 0, 00:25:20.548 "timeout_admin_us": 0, 00:25:20.548 "keep_alive_timeout_ms": 10000, 00:25:20.548 "arbitration_burst": 0, 00:25:20.548 "low_priority_weight": 0, 00:25:20.548 "medium_priority_weight": 0, 00:25:20.548 "high_priority_weight": 0, 00:25:20.548 "nvme_adminq_poll_period_us": 10000, 00:25:20.548 "nvme_ioq_poll_period_us": 0, 00:25:20.548 "io_queue_requests": 0, 00:25:20.548 "delay_cmd_submit": true, 00:25:20.548 "transport_retry_count": 4, 00:25:20.549 "bdev_retry_count": 3, 00:25:20.549 "transport_ack_timeout": 0, 00:25:20.549 "ctrlr_loss_timeout_sec": 0, 00:25:20.549 "reconnect_delay_sec": 0, 00:25:20.549 "fast_io_fail_timeout_sec": 0, 00:25:20.549 "disable_auto_failback": false, 00:25:20.549 "generate_uuids": false, 00:25:20.549 "transport_tos": 0, 00:25:20.549 "nvme_error_stat": false, 00:25:20.549 "rdma_srq_size": 0, 00:25:20.549 "io_path_stat": false, 00:25:20.549 "allow_accel_sequence": false, 00:25:20.549 "rdma_max_cq_size": 0, 00:25:20.549 "rdma_cm_event_timeout_ms": 0, 00:25:20.549 "dhchap_digests": [ 00:25:20.549 "sha256", 
00:25:20.549 "sha384", 00:25:20.549 "sha512" 00:25:20.549 ], 00:25:20.549 "dhchap_dhgroups": [ 00:25:20.549 "null", 00:25:20.549 "ffdhe2048", 00:25:20.549 "ffdhe3072", 00:25:20.549 "ffdhe4096", 00:25:20.549 "ffdhe6144", 00:25:20.549 "ffdhe8192" 00:25:20.549 ] 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "bdev_nvme_set_hotplug", 00:25:20.549 "params": { 00:25:20.549 "period_us": 100000, 00:25:20.549 "enable": false 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "bdev_malloc_create", 00:25:20.549 "params": { 00:25:20.549 "name": "malloc0", 00:25:20.549 "num_blocks": 8192, 00:25:20.549 "block_size": 4096, 00:25:20.549 "physical_block_size": 4096, 00:25:20.549 "uuid": "5416e6dd-2a73-4e96-bda3-a57f40458604", 00:25:20.549 "optimal_io_boundary": 0 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "bdev_wait_for_examine" 00:25:20.549 } 00:25:20.549 ] 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "subsystem": "nbd", 00:25:20.549 "config": [] 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "subsystem": "scheduler", 00:25:20.549 "config": [ 00:25:20.549 { 00:25:20.549 "method": "framework_set_scheduler", 00:25:20.549 "params": { 00:25:20.549 "name": "static" 00:25:20.549 } 00:25:20.549 } 00:25:20.549 ] 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "subsystem": "nvmf", 00:25:20.549 "config": [ 00:25:20.549 { 00:25:20.549 "method": "nvmf_set_config", 00:25:20.549 "params": { 00:25:20.549 "discovery_filter": "match_any", 00:25:20.549 "admin_cmd_passthru": { 00:25:20.549 "identify_ctrlr": false 00:25:20.549 } 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_set_max_subsystems", 00:25:20.549 "params": { 00:25:20.549 "max_subsystems": 1024 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_set_crdt", 00:25:20.549 "params": { 00:25:20.549 "crdt1": 0, 00:25:20.549 "crdt2": 0, 00:25:20.549 "crdt3": 0 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_create_transport", 00:25:20.549 "params": { 00:25:20.549 "trtype": "TCP", 00:25:20.549 "max_queue_depth": 128, 00:25:20.549 "max_io_qpairs_per_ctrlr": 127, 00:25:20.549 "in_capsule_data_size": 4096, 00:25:20.549 "max_io_size": 131072, 00:25:20.549 "io_unit_size": 131072, 00:25:20.549 "max_aq_depth": 128, 00:25:20.549 "num_shared_buffers": 511, 00:25:20.549 "buf_cache_size": 4294967295, 00:25:20.549 "dif_insert_or_strip": false, 00:25:20.549 "zcopy": false, 00:25:20.549 "c2h_success": false, 00:25:20.549 "sock_priority": 0, 00:25:20.549 "abort_timeout_sec": 1, 00:25:20.549 "ack_timeout": 0, 00:25:20.549 "data_wr_pool_size": 0 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_create_subsystem", 00:25:20.549 "params": { 00:25:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.549 "allow_any_host": false, 00:25:20.549 "serial_number": "00000000000000000000", 00:25:20.549 "model_number": "SPDK bdev Controller", 00:25:20.549 "max_namespaces": 32, 00:25:20.549 "min_cntlid": 1, 00:25:20.549 "max_cntlid": 65519, 00:25:20.549 "ana_reporting": false 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_subsystem_add_host", 00:25:20.549 "params": { 00:25:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.549 "host": "nqn.2016-06.io.spdk:host1", 00:25:20.549 "psk": "key0" 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_subsystem_add_ns", 00:25:20.549 "params": { 00:25:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.549 "namespace": { 00:25:20.549 "nsid": 1, 
00:25:20.549 "bdev_name": "malloc0", 00:25:20.549 "nguid": "5416E6DD2A734E96BDA3A57F40458604", 00:25:20.549 "uuid": "5416e6dd-2a73-4e96-bda3-a57f40458604", 00:25:20.549 "no_auto_visible": false 00:25:20.549 } 00:25:20.549 } 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "method": "nvmf_subsystem_add_listener", 00:25:20.549 "params": { 00:25:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.549 "listen_address": { 00:25:20.549 "trtype": "TCP", 00:25:20.549 "adrfam": "IPv4", 00:25:20.549 "traddr": "10.0.0.2", 00:25:20.549 "trsvcid": "4420" 00:25:20.549 }, 00:25:20.549 "secure_channel": true 00:25:20.549 } 00:25:20.549 } 00:25:20.549 ] 00:25:20.549 } 00:25:20.549 ] 00:25:20.549 }' 00:25:20.549 23:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:20.809 "subsystems": [ 00:25:20.809 { 00:25:20.809 "subsystem": "keyring", 00:25:20.809 "config": [ 00:25:20.809 { 00:25:20.809 "method": "keyring_file_add_key", 00:25:20.809 "params": { 00:25:20.809 "name": "key0", 00:25:20.809 "path": "/tmp/tmp.PDlaRDnbEz" 00:25:20.809 } 00:25:20.809 } 00:25:20.809 ] 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "subsystem": "iobuf", 00:25:20.809 "config": [ 00:25:20.809 { 00:25:20.809 "method": "iobuf_set_options", 00:25:20.809 "params": { 00:25:20.809 "small_pool_count": 8192, 00:25:20.809 "large_pool_count": 1024, 00:25:20.809 "small_bufsize": 8192, 00:25:20.809 "large_bufsize": 135168 00:25:20.809 } 00:25:20.809 } 00:25:20.809 ] 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "subsystem": "sock", 00:25:20.809 "config": [ 00:25:20.809 { 00:25:20.809 "method": "sock_set_default_impl", 00:25:20.809 "params": { 00:25:20.809 "impl_name": "posix" 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "sock_impl_set_options", 00:25:20.809 "params": { 00:25:20.809 "impl_name": "ssl", 00:25:20.809 "recv_buf_size": 4096, 00:25:20.809 "send_buf_size": 4096, 00:25:20.809 "enable_recv_pipe": true, 00:25:20.809 "enable_quickack": false, 00:25:20.809 "enable_placement_id": 0, 00:25:20.809 "enable_zerocopy_send_server": true, 00:25:20.809 "enable_zerocopy_send_client": false, 00:25:20.809 "zerocopy_threshold": 0, 00:25:20.809 "tls_version": 0, 00:25:20.809 "enable_ktls": false 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "sock_impl_set_options", 00:25:20.809 "params": { 00:25:20.809 "impl_name": "posix", 00:25:20.809 "recv_buf_size": 2097152, 00:25:20.809 "send_buf_size": 2097152, 00:25:20.809 "enable_recv_pipe": true, 00:25:20.809 "enable_quickack": false, 00:25:20.809 "enable_placement_id": 0, 00:25:20.809 "enable_zerocopy_send_server": true, 00:25:20.809 "enable_zerocopy_send_client": false, 00:25:20.809 "zerocopy_threshold": 0, 00:25:20.809 "tls_version": 0, 00:25:20.809 "enable_ktls": false 00:25:20.809 } 00:25:20.809 } 00:25:20.809 ] 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "subsystem": "vmd", 00:25:20.809 "config": [] 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "subsystem": "accel", 00:25:20.809 "config": [ 00:25:20.809 { 00:25:20.809 "method": "accel_set_options", 00:25:20.809 "params": { 00:25:20.809 "small_cache_size": 128, 00:25:20.809 "large_cache_size": 16, 00:25:20.809 "task_count": 2048, 00:25:20.809 "sequence_count": 2048, 00:25:20.809 "buf_count": 2048 00:25:20.809 } 00:25:20.809 } 00:25:20.809 ] 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "subsystem": "bdev", 00:25:20.809 "config": [ 
00:25:20.809 { 00:25:20.809 "method": "bdev_set_options", 00:25:20.809 "params": { 00:25:20.809 "bdev_io_pool_size": 65535, 00:25:20.809 "bdev_io_cache_size": 256, 00:25:20.809 "bdev_auto_examine": true, 00:25:20.809 "iobuf_small_cache_size": 128, 00:25:20.809 "iobuf_large_cache_size": 16 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_raid_set_options", 00:25:20.809 "params": { 00:25:20.809 "process_window_size_kb": 1024 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_iscsi_set_options", 00:25:20.809 "params": { 00:25:20.809 "timeout_sec": 30 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_nvme_set_options", 00:25:20.809 "params": { 00:25:20.809 "action_on_timeout": "none", 00:25:20.809 "timeout_us": 0, 00:25:20.809 "timeout_admin_us": 0, 00:25:20.809 "keep_alive_timeout_ms": 10000, 00:25:20.809 "arbitration_burst": 0, 00:25:20.809 "low_priority_weight": 0, 00:25:20.809 "medium_priority_weight": 0, 00:25:20.809 "high_priority_weight": 0, 00:25:20.809 "nvme_adminq_poll_period_us": 10000, 00:25:20.809 "nvme_ioq_poll_period_us": 0, 00:25:20.809 "io_queue_requests": 512, 00:25:20.809 "delay_cmd_submit": true, 00:25:20.809 "transport_retry_count": 4, 00:25:20.809 "bdev_retry_count": 3, 00:25:20.809 "transport_ack_timeout": 0, 00:25:20.809 "ctrlr_loss_timeout_sec": 0, 00:25:20.809 "reconnect_delay_sec": 0, 00:25:20.809 "fast_io_fail_timeout_sec": 0, 00:25:20.809 "disable_auto_failback": false, 00:25:20.809 "generate_uuids": false, 00:25:20.809 "transport_tos": 0, 00:25:20.809 "nvme_error_stat": false, 00:25:20.809 "rdma_srq_size": 0, 00:25:20.809 "io_path_stat": false, 00:25:20.809 "allow_accel_sequence": false, 00:25:20.809 "rdma_max_cq_size": 0, 00:25:20.809 "rdma_cm_event_timeout_ms": 0, 00:25:20.809 "dhchap_digests": [ 00:25:20.809 "sha256", 00:25:20.809 "sha384", 00:25:20.809 "sha512" 00:25:20.809 ], 00:25:20.809 "dhchap_dhgroups": [ 00:25:20.809 "null", 00:25:20.809 "ffdhe2048", 00:25:20.809 "ffdhe3072", 00:25:20.809 "ffdhe4096", 00:25:20.809 "ffdhe6144", 00:25:20.809 "ffdhe8192" 00:25:20.809 ] 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_nvme_attach_controller", 00:25:20.809 "params": { 00:25:20.809 "name": "nvme0", 00:25:20.809 "trtype": "TCP", 00:25:20.809 "adrfam": "IPv4", 00:25:20.809 "traddr": "10.0.0.2", 00:25:20.809 "trsvcid": "4420", 00:25:20.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.809 "prchk_reftag": false, 00:25:20.809 "prchk_guard": false, 00:25:20.809 "ctrlr_loss_timeout_sec": 0, 00:25:20.809 "reconnect_delay_sec": 0, 00:25:20.809 "fast_io_fail_timeout_sec": 0, 00:25:20.809 "psk": "key0", 00:25:20.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:20.809 "hdgst": false, 00:25:20.809 "ddgst": false 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_nvme_set_hotplug", 00:25:20.809 "params": { 00:25:20.809 "period_us": 100000, 00:25:20.809 "enable": false 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_enable_histogram", 00:25:20.809 "params": { 00:25:20.809 "name": "nvme0n1", 00:25:20.809 "enable": true 00:25:20.809 } 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "method": "bdev_wait_for_examine" 00:25:20.809 } 00:25:20.809 ] 00:25:20.809 }, 00:25:20.809 { 00:25:20.809 "subsystem": "nbd", 00:25:20.809 "config": [] 00:25:20.809 } 00:25:20.809 ] 00:25:20.809 }' 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2489400 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2489400 ']' 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2489400 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2489400 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2489400' 00:25:20.809 killing process with pid 2489400 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2489400 00:25:20.809 Received shutdown signal, test time was about 1.000000 seconds 00:25:20.809 00:25:20.809 Latency(us) 00:25:20.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.809 =================================================================================================================== 00:25:20.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.809 23:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2489400 00:25:21.744 23:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2489157 00:25:21.744 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2489157 ']' 00:25:21.744 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2489157 00:25:21.744 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:21.744 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.744 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2489157 00:25:22.002 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:22.002 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:22.002 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2489157' 00:25:22.002 killing process with pid 2489157 00:25:22.002 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2489157 00:25:22.002 23:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2489157 00:25:23.377 23:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:23.377 "subsystems": [ 00:25:23.377 { 00:25:23.377 "subsystem": "keyring", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "keyring_file_add_key", 00:25:23.377 "params": { 00:25:23.377 "name": "key0", 00:25:23.377 "path": "/tmp/tmp.PDlaRDnbEz" 00:25:23.377 } 00:25:23.377 } 00:25:23.377 ] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "iobuf", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "iobuf_set_options", 00:25:23.377 "params": { 00:25:23.377 "small_pool_count": 8192, 00:25:23.377 "large_pool_count": 1024, 00:25:23.377 "small_bufsize": 8192, 00:25:23.377 "large_bufsize": 135168 00:25:23.377 } 00:25:23.377 } 00:25:23.377 ] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "sock", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "sock_set_default_impl", 00:25:23.377 "params": { 00:25:23.377 "impl_name": "posix" 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": 
"sock_impl_set_options", 00:25:23.377 "params": { 00:25:23.377 "impl_name": "ssl", 00:25:23.377 "recv_buf_size": 4096, 00:25:23.377 "send_buf_size": 4096, 00:25:23.377 "enable_recv_pipe": true, 00:25:23.377 "enable_quickack": false, 00:25:23.377 "enable_placement_id": 0, 00:25:23.377 "enable_zerocopy_send_server": true, 00:25:23.377 "enable_zerocopy_send_client": false, 00:25:23.377 "zerocopy_threshold": 0, 00:25:23.377 "tls_version": 0, 00:25:23.377 "enable_ktls": false 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "sock_impl_set_options", 00:25:23.377 "params": { 00:25:23.377 "impl_name": "posix", 00:25:23.377 "recv_buf_size": 2097152, 00:25:23.377 "send_buf_size": 2097152, 00:25:23.377 "enable_recv_pipe": true, 00:25:23.377 "enable_quickack": false, 00:25:23.377 "enable_placement_id": 0, 00:25:23.377 "enable_zerocopy_send_server": true, 00:25:23.377 "enable_zerocopy_send_client": false, 00:25:23.377 "zerocopy_threshold": 0, 00:25:23.377 "tls_version": 0, 00:25:23.377 "enable_ktls": false 00:25:23.377 } 00:25:23.377 } 00:25:23.377 ] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "vmd", 00:25:23.377 "config": [] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "accel", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "accel_set_options", 00:25:23.377 "params": { 00:25:23.377 "small_cache_size": 128, 00:25:23.377 "large_cache_size": 16, 00:25:23.377 "task_count": 2048, 00:25:23.377 "sequence_count": 2048, 00:25:23.377 "buf_count": 2048 00:25:23.377 } 00:25:23.377 } 00:25:23.377 ] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "bdev", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "bdev_set_options", 00:25:23.377 "params": { 00:25:23.377 "bdev_io_pool_size": 65535, 00:25:23.377 "bdev_io_cache_size": 256, 00:25:23.377 "bdev_auto_examine": true, 00:25:23.377 "iobuf_small_cache_size": 128, 00:25:23.377 "iobuf_large_cache_size": 16 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "bdev_raid_set_options", 00:25:23.377 "params": { 00:25:23.377 "process_window_size_kb": 1024 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "bdev_iscsi_set_options", 00:25:23.377 "params": { 00:25:23.377 "timeout_sec": 30 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "bdev_nvme_set_options", 00:25:23.377 "params": { 00:25:23.377 "action_on_timeout": "none", 00:25:23.377 "timeout_us": 0, 00:25:23.377 "timeout_admin_us": 0, 00:25:23.377 "keep_alive_timeout_ms": 10000, 00:25:23.377 "arbitration_burst": 0, 00:25:23.377 "low_priority_weight": 0, 00:25:23.377 "medium_priority_weight": 0, 00:25:23.377 "high_priority_weight": 0, 00:25:23.377 "nvme_adminq_poll_period_us": 10000, 00:25:23.377 "nvme_ioq_poll_period_us": 0, 00:25:23.377 "io_queue_requests": 0, 00:25:23.377 "delay_cmd_submit": true, 00:25:23.377 "transport_retry_count": 4, 00:25:23.377 "bdev_retry_count": 3, 00:25:23.377 "transport_ack_timeout": 0, 00:25:23.377 "ctrlr_loss_timeout_sec": 0, 00:25:23.377 "reconnect_delay_sec": 0, 00:25:23.377 "fast_io_fail_timeout_sec": 0, 00:25:23.377 "disable_auto_failback": false, 00:25:23.377 "generate_uuids": false, 00:25:23.377 "transport_tos": 0, 00:25:23.377 "nvme_error_stat": false, 00:25:23.377 "rdma_srq_size": 0, 00:25:23.377 "io_path_stat": false, 00:25:23.377 "allow_accel_sequence": false, 00:25:23.377 "rdma_max_cq_size": 0, 00:25:23.377 "rdma_cm_event_timeout_ms": 0, 00:25:23.377 "dhchap_digests": [ 00:25:23.377 "sha256", 00:25:23.377 "sha384", 00:25:23.377 "sha512" 
00:25:23.377 ], 00:25:23.377 "dhchap_dhgroups": [ 00:25:23.377 "null", 00:25:23.377 "ffdhe2048", 00:25:23.377 "ffdhe3072", 00:25:23.377 "ffdhe4096", 00:25:23.377 "ffdhe6144", 00:25:23.377 "ffdhe8192" 00:25:23.377 ] 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "bdev_nvme_set_hotplug", 00:25:23.377 "params": { 00:25:23.377 "period_us": 100000, 00:25:23.377 "enable": false 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "bdev_malloc_create", 00:25:23.377 "params": { 00:25:23.377 "name": "malloc0", 00:25:23.377 "num_blocks": 8192, 00:25:23.377 "block_size": 4096, 00:25:23.377 "physical_block_size": 4096, 00:25:23.377 "uuid": "5416e6dd-2a73-4e96-bda3-a57f40458604", 00:25:23.377 "optimal_io_boundary": 0 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "bdev_wait_for_examine" 00:25:23.377 } 00:25:23.377 ] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "nbd", 00:25:23.377 "config": [] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "scheduler", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "framework_set_scheduler", 00:25:23.377 "params": { 00:25:23.377 "name": "static" 00:25:23.377 } 00:25:23.377 } 00:25:23.377 ] 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "subsystem": "nvmf", 00:25:23.377 "config": [ 00:25:23.377 { 00:25:23.377 "method": "nvmf_set_config", 00:25:23.377 "params": { 00:25:23.377 "discovery_filter": "match_any", 00:25:23.377 "admin_cmd_passthru": { 00:25:23.377 "identify_ctrlr": false 00:25:23.377 } 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_set_max_subsystems", 00:25:23.377 "params": { 00:25:23.377 "max_subsystems": 1024 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_set_crdt", 00:25:23.377 "params": { 00:25:23.377 "crdt1": 0, 00:25:23.377 "crdt2": 0, 00:25:23.377 "crdt3": 0 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_create_transport", 00:25:23.377 "params": { 00:25:23.377 "trtype": "TCP", 00:25:23.377 "max_queue_depth": 128, 00:25:23.377 "max_io_qpairs_per_ctrlr": 127, 00:25:23.377 "in_capsule_data_size": 4096, 00:25:23.377 "max_io_size": 131072, 00:25:23.377 "io_unit_size": 131072, 00:25:23.377 "max_aq_depth": 128, 00:25:23.377 "num_shared_buffers": 511, 00:25:23.377 "buf_cache_size": 4294967295, 00:25:23.377 "dif_insert_or_strip": false, 00:25:23.377 "zcopy": false, 00:25:23.377 "c2h_success": false, 00:25:23.377 "sock_priority": 0, 00:25:23.377 "abort_timeout_sec": 1, 00:25:23.377 "ack_timeout": 0, 00:25:23.377 "data_wr_pool_size": 0 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_create_subsystem", 00:25:23.377 "params": { 00:25:23.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.377 "allow_any_host": false, 00:25:23.377 "serial_number": "00000000000000000000", 00:25:23.377 "model_number": "SPDK bdev Controller", 00:25:23.377 "max_namespaces": 32, 00:25:23.377 "min_cntlid": 1, 00:25:23.377 "max_cntlid": 65519, 00:25:23.377 "ana_reporting": false 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_subsystem_add_host", 00:25:23.377 "params": { 00:25:23.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.377 "host": "nqn.2016-06.io.spdk:host1", 00:25:23.377 "psk": "key0" 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_subsystem_add_ns", 00:25:23.377 "params": { 00:25:23.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.377 "namespace": { 00:25:23.377 "nsid": 1, 00:25:23.377 "bdev_name": "malloc0", 00:25:23.377 
"nguid": "5416E6DD2A734E96BDA3A57F40458604", 00:25:23.377 "uuid": "5416e6dd-2a73-4e96-bda3-a57f40458604", 00:25:23.377 "no_auto_visible": false 00:25:23.377 } 00:25:23.377 } 00:25:23.377 }, 00:25:23.377 { 00:25:23.377 "method": "nvmf_subsystem_add_listener", 00:25:23.377 "params": { 00:25:23.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.377 "listen_address": { 00:25:23.378 "trtype": "TCP", 00:25:23.378 "adrfam": "IPv4", 00:25:23.378 "traddr": "10.0.0.2", 00:25:23.378 "trsvcid": "4420" 00:25:23.378 }, 00:25:23.378 "secure_channel": true 00:25:23.378 } 00:25:23.378 } 00:25:23.378 ] 00:25:23.378 } 00:25:23.378 ] 00:25:23.378 }' 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2490269 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2490269 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2490269 ']' 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.378 23:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.378 [2024-07-10 23:29:32.292696] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:23.378 [2024-07-10 23:29:32.292786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.378 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.378 [2024-07-10 23:29:32.399983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.635 [2024-07-10 23:29:32.607698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.635 [2024-07-10 23:29:32.607742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.635 [2024-07-10 23:29:32.607754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.635 [2024-07-10 23:29:32.607765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.635 [2024-07-10 23:29:32.607774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.635 [2024-07-10 23:29:32.607858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.203 [2024-07-10 23:29:33.160037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.203 [2024-07-10 23:29:33.192048] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.203 [2024-07-10 23:29:33.192277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2490365 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2490365 /var/tmp/bdevperf.sock 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2490365 ']' 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
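The bdevperf side is symmetric: -z starts the app idle until RPCs arrive, -r names its own socket, and -c /dev/fd/63 feeds it the $bperfcfg JSON, whose keyring_file_add_key and bdev_nvme_attach_controller entries recreate the TLS controller without any manual RPCs. A sketch of the launch (again assuming the JSON sits in a shell variable):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
# Once it is listening, the controller should already exist; the check at
# target/tls.sh@275 below does exactly this:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'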
00:25:24.203 23:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:24.203 "subsystems": [ 00:25:24.203 { 00:25:24.203 "subsystem": "keyring", 00:25:24.203 "config": [ 00:25:24.203 { 00:25:24.203 "method": "keyring_file_add_key", 00:25:24.203 "params": { 00:25:24.203 "name": "key0", 00:25:24.203 "path": "/tmp/tmp.PDlaRDnbEz" 00:25:24.203 } 00:25:24.203 } 00:25:24.203 ] 00:25:24.203 }, 00:25:24.203 { 00:25:24.203 "subsystem": "iobuf", 00:25:24.203 "config": [ 00:25:24.203 { 00:25:24.203 "method": "iobuf_set_options", 00:25:24.203 "params": { 00:25:24.203 "small_pool_count": 8192, 00:25:24.203 "large_pool_count": 1024, 00:25:24.203 "small_bufsize": 8192, 00:25:24.203 "large_bufsize": 135168 00:25:24.203 } 00:25:24.203 } 00:25:24.203 ] 00:25:24.203 }, 00:25:24.203 { 00:25:24.203 "subsystem": "sock", 00:25:24.203 "config": [ 00:25:24.203 { 00:25:24.203 "method": "sock_set_default_impl", 00:25:24.203 "params": { 00:25:24.203 "impl_name": "posix" 00:25:24.203 } 00:25:24.203 }, 00:25:24.203 { 00:25:24.203 "method": "sock_impl_set_options", 00:25:24.203 "params": { 00:25:24.203 "impl_name": "ssl", 00:25:24.203 "recv_buf_size": 4096, 00:25:24.203 "send_buf_size": 4096, 00:25:24.203 "enable_recv_pipe": true, 00:25:24.203 "enable_quickack": false, 00:25:24.203 "enable_placement_id": 0, 00:25:24.203 "enable_zerocopy_send_server": true, 00:25:24.203 "enable_zerocopy_send_client": false, 00:25:24.203 "zerocopy_threshold": 0, 00:25:24.203 "tls_version": 0, 00:25:24.203 "enable_ktls": false 00:25:24.203 } 00:25:24.203 }, 00:25:24.203 { 00:25:24.203 "method": "sock_impl_set_options", 00:25:24.203 "params": { 00:25:24.203 "impl_name": "posix", 00:25:24.203 "recv_buf_size": 2097152, 00:25:24.203 "send_buf_size": 2097152, 00:25:24.203 "enable_recv_pipe": true, 00:25:24.203 "enable_quickack": false, 00:25:24.203 "enable_placement_id": 0, 00:25:24.203 "enable_zerocopy_send_server": true, 00:25:24.203 "enable_zerocopy_send_client": false, 00:25:24.203 "zerocopy_threshold": 0, 00:25:24.204 "tls_version": 0, 00:25:24.204 "enable_ktls": false 00:25:24.204 } 00:25:24.204 } 00:25:24.204 ] 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "subsystem": "vmd", 00:25:24.204 "config": [] 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "subsystem": "accel", 00:25:24.204 "config": [ 00:25:24.204 { 00:25:24.204 "method": "accel_set_options", 00:25:24.204 "params": { 00:25:24.204 "small_cache_size": 128, 00:25:24.204 "large_cache_size": 16, 00:25:24.204 "task_count": 2048, 00:25:24.204 "sequence_count": 2048, 00:25:24.204 "buf_count": 2048 00:25:24.204 } 00:25:24.204 } 00:25:24.204 ] 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "subsystem": "bdev", 00:25:24.204 "config": [ 00:25:24.204 { 00:25:24.204 "method": "bdev_set_options", 00:25:24.204 "params": { 00:25:24.204 "bdev_io_pool_size": 65535, 00:25:24.204 "bdev_io_cache_size": 256, 00:25:24.204 "bdev_auto_examine": true, 00:25:24.204 "iobuf_small_cache_size": 128, 00:25:24.204 "iobuf_large_cache_size": 16 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_raid_set_options", 00:25:24.204 "params": { 00:25:24.204 "process_window_size_kb": 1024 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_iscsi_set_options", 00:25:24.204 "params": { 00:25:24.204 "timeout_sec": 30 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_nvme_set_options", 00:25:24.204 "params": { 00:25:24.204 "action_on_timeout": "none", 00:25:24.204 "timeout_us": 0, 00:25:24.204 "timeout_admin_us": 0, 00:25:24.204 "keep_alive_timeout_ms": 
10000, 00:25:24.204 "arbitration_burst": 0, 00:25:24.204 "low_priority_weight": 0, 00:25:24.204 "medium_priority_weight": 0, 00:25:24.204 "high_priority_weight": 0, 00:25:24.204 "nvme_adminq_poll_period_us": 10000, 00:25:24.204 "nvme_ioq_poll_period_us": 0, 00:25:24.204 "io_queue_requests": 512, 00:25:24.204 "delay_cmd_submit": true, 00:25:24.204 "transport_retry_count": 4, 00:25:24.204 "bdev_retry_count": 3, 00:25:24.204 "transport_ack_timeout": 0, 00:25:24.204 "ctrlr_loss_timeout_sec": 0, 00:25:24.204 "reconnect_delay_sec": 0, 00:25:24.204 "fast_io_fail_timeout_sec": 0, 00:25:24.204 "disable_auto_failback": false, 00:25:24.204 "generate_uuids": false, 00:25:24.204 "transport_tos": 0, 00:25:24.204 "nvme_error_stat": false, 00:25:24.204 "rdma_srq_size": 0, 00:25:24.204 "io_path_stat": false, 00:25:24.204 "allow_accel_sequence": false, 00:25:24.204 "rdma_max_cq_size": 0, 00:25:24.204 "rdma_cm_event_timeout_ms": 0, 00:25:24.204 "dhchap_digests": [ 00:25:24.204 "sha256", 00:25:24.204 "sha384", 00:25:24.204 "sha512" 00:25:24.204 ], 00:25:24.204 "dhchap_dhgroups": [ 00:25:24.204 "null", 00:25:24.204 "ffdhe2048", 00:25:24.204 "ffdhe3072", 00:25:24.204 "ffdhe4096", 00:25:24.204 "ffdhe6144", 00:25:24.204 "ffdhe8192" 00:25:24.204 ] 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_nvme_attach_controller", 00:25:24.204 "params": { 00:25:24.204 "name": "nvme0", 00:25:24.204 "trtype": "TCP", 00:25:24.204 "adrfam": "IPv4", 00:25:24.204 "traddr": "10.0.0.2", 00:25:24.204 "trsvcid": "4420", 00:25:24.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.204 "prchk_reftag": false, 00:25:24.204 "prchk_guard": false, 00:25:24.204 "ctrlr_loss_timeout_sec": 0, 00:25:24.204 "reconnect_delay_sec": 0, 00:25:24.204 "fast_io_fail_timeout_sec": 0, 00:25:24.204 "psk": "key0", 00:25:24.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.204 "hdgst": false, 00:25:24.204 "ddgst": false 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_nvme_set_hotplug", 00:25:24.204 "params": { 00:25:24.204 "period_us": 100000, 00:25:24.204 "enable": false 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_enable_histogram", 00:25:24.204 "params": { 00:25:24.204 "name": "nvme0n1", 00:25:24.204 "enable": true 00:25:24.204 } 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "method": "bdev_wait_for_examine" 00:25:24.204 } 00:25:24.204 ] 00:25:24.204 }, 00:25:24.204 { 00:25:24.204 "subsystem": "nbd", 00:25:24.204 "config": [] 00:25:24.204 } 00:25:24.204 ] 00:25:24.204 }' 00:25:24.204 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:24.204 23:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.464 [2024-07-10 23:29:33.327681] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:25:24.464 [2024-07-10 23:29:33.327769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490365 ] 00:25:24.464 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.464 [2024-07-10 23:29:33.432115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.721 [2024-07-10 23:29:33.656064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.287 [2024-07-10 23:29:34.085625] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:25.287 23:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.287 23:29:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:25.287 23:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.287 23:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:25.545 23:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.545 23:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:25.545 Running I/O for 1 seconds... 00:25:26.480 00:25:26.480 Latency(us) 00:25:26.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.480 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:26.480 Verification LBA range: start 0x0 length 0x2000 00:25:26.480 nvme0n1 : 1.02 4431.68 17.31 0.00 0.00 28635.32 5812.76 61546.85 00:25:26.480 =================================================================================================================== 00:25:26.480 Total : 4431.68 17.31 0.00 0.00 28635.32 5812.76 61546.85 00:25:26.480 0 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:26.480 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:26.480 nvmf_trace.0 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2490365 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2490365 ']' 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2490365 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2490365 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2490365' 00:25:26.739 killing process with pid 2490365 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2490365 00:25:26.739 Received shutdown signal, test time was about 1.000000 seconds 00:25:26.739 00:25:26.739 Latency(us) 00:25:26.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.739 =================================================================================================================== 00:25:26.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.739 23:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2490365 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.673 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.673 rmmod nvme_tcp 00:25:27.673 rmmod nvme_fabrics 00:25:27.932 rmmod nvme_keyring 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2490269 ']' 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2490269 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2490269 ']' 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2490269 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2490269 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2490269' 00:25:27.932 killing process with pid 2490269 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2490269 00:25:27.932 23:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2490269 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.310 23:29:38 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.310 23:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.215 23:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.215 23:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WDRewWVc2O /tmp/tmp.uhWZ4QRX9w /tmp/tmp.PDlaRDnbEz 00:25:31.215 00:25:31.215 real 1m46.153s 00:25:31.215 user 2m43.860s 00:25:31.215 sys 0m28.263s 00:25:31.215 23:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:31.215 23:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.215 ************************************ 00:25:31.215 END TEST nvmf_tls 00:25:31.215 ************************************ 00:25:31.215 23:29:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:31.215 23:29:40 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:31.215 23:29:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:31.215 23:29:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.215 23:29:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:31.475 ************************************ 00:25:31.475 START TEST nvmf_fips 00:25:31.475 ************************************ 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:31.475 * Looking for test storage... 
00:25:31.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 23:29:40 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:25:31.475 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:31.476 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:31.735 Error setting digest 00:25:31.735 00C29DF80E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:31.735 00C29DF80E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.735 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.736 23:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.007 
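A few entries back, the ge 3.0.9 3.0.0 call walked cmp_versions through the OpenSSL version check, comparing the dotted strings field by field until 9 beat 0. A standalone sketch of that comparison, assuming purely numeric fields (the real scripts/common.sh helper also regex-checks each field before the arithmetic):

  # Sketch of a ">=" dotted-version compare in the style of cmp_versions.
  ge() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    # Walk the longer field list; missing fields evaluate to 0 in bash arithmetic.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      ((ver1[v] > ver2[v])) && return 0
      ((ver1[v] < ver2[v])) && return 1
    done
    return 0    # equal throughout also satisfies >=
  }
  ge "$(openssl version | awk '{print $2}')" 3.0.0    # true for the 3.0.9 detected above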
23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.007 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.007 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.007 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.007 Found net devices under 0000:86:00.1: cvl_0_1 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:25:37.007 00:25:37.007 --- 10.0.0.2 ping statistics --- 00:25:37.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.007 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
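Condensed, the nvmf_tcp_init sequence above splits the two E810 ports across a network namespace so that the initiator (cvl_0_1, 10.0.0.1, root namespace) and the target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) reach each other over a real wire. The commands, collected from the trace with this run's interface names and addresses:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                   # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # the two pings verify both directions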
00:25:37.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:25:37.007 00:25:37.007 --- 10.0.0.1 ping statistics --- 00:25:37.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.007 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.007 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2494604 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2494604 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2494604 ']' 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:37.008 23:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:37.008 [2024-07-10 23:29:45.876432] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:37.008 [2024-07-10 23:29:45.876523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.008 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.008 [2024-07-10 23:29:45.983885] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.267 [2024-07-10 23:29:46.205568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.267 [2024-07-10 23:29:46.205609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
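Once nvmf_tgt is launched in the namespace, waitforlisten 2494604 blocks until the target answers on /var/tmp/spdk.sock. The trace only exposes the helper's locals (rpc_addr, max_retries=100), so the polling body below is an illustrative guess rather than the helper's exact code; rpc_get_methods is simply a cheap RPC that any live SPDK application answers:

  # Rough shape of waitforlisten; interval and probe are assumptions.
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
      sleep 0.5
    done
    return 1
  }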
00:25:37.267 [2024-07-10 23:29:46.205621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.267 [2024-07-10 23:29:46.205635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.267 [2024-07-10 23:29:46.205644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.267 [2024-07-10 23:29:46.205676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:37.834 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:37.834 [2024-07-10 23:29:46.797636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.834 [2024-07-10 23:29:46.813621] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:37.834 [2024-07-10 23:29:46.813827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.834 [2024-07-10 23:29:46.886937] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:37.834 malloc0 00:25:38.092 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2494853 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2494853 /var/tmp/bdevperf.sock 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2494853 ']' 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.093 23:29:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:38.093 [2024-07-10 23:29:46.999203] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:38.093 [2024-07-10 23:29:46.999313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494853 ] 00:25:38.093 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.093 [2024-07-10 23:29:47.101909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.350 [2024-07-10 23:29:47.323673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.918 23:29:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:38.918 23:29:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:38.918 23:29:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:38.918 [2024-07-10 23:29:47.927261] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:38.918 [2024-07-10 23:29:47.927379] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:39.229 TLSTESTn1 00:25:39.229 23:29:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:39.229 Running I/O for 10 seconds... 
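Before those ten seconds of I/O start, the harness has written the interchange-format PSK printed earlier to a 0600 key file, started bdevperf (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10), and attached a TLS controller through it. Distilled from the trace, with the long workspace paths shortened:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt && chmod 0600 key.txt
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drives the run measured below

The two deprecation warnings in this trace (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk) flag exactly this file-path way of passing the PSK, scheduled for removal in v24.09.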
00:25:49.225 
00:25:49.225 Latency(us) 
00:25:49.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:49.225 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:25:49.225 Verification LBA range: start 0x0 length 0x2000 
00:25:49.225 TLSTESTn1 : 10.04 3955.10 15.45 0.00 0.00 32286.37 6069.20 62914.56 
00:25:49.225 =================================================================================================================== 
00:25:49.225 Total : 3955.10 15.45 0.00 0.00 32286.37 6069.20 62914.56 
00:25:49.225 0 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 
00:25:49.225 nvmf_trace.0 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2494853 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2494853 ']' 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2494853 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:25:49.225 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2494853 
00:25:49.485 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 
00:25:49.485 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 
00:25:49.485 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2494853' 
00:25:49.485 killing process with pid 2494853 
00:25:49.485 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2494853 
00:25:49.485 Received shutdown signal, test time was about 10.000000 seconds 
00:25:49.485 
00:25:49.485 Latency(us) 
00:25:49.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:49.485 =================================================================================================================== 
00:25:49.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:25:49.485 [2024-07-10 23:29:58.328958] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:25:49.485 23:29:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2494853 
00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 
00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.423 rmmod nvme_tcp 00:25:50.423 rmmod nvme_fabrics 00:25:50.423 rmmod nvme_keyring 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2494604 ']' 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2494604 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2494604 ']' 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2494604 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.423 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2494604 00:25:50.682 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:50.682 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:50.682 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2494604' 00:25:50.682 killing process with pid 2494604 00:25:50.682 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2494604 00:25:50.682 [2024-07-10 23:29:59.504002] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:50.682 23:29:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2494604 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.061 23:30:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.966 23:30:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.966 23:30:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:53.966 00:25:53.966 real 0m22.708s 00:25:53.966 user 0m25.874s 00:25:53.966 sys 0m8.498s 00:25:53.966 23:30:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.966 23:30:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:53.966 ************************************ 00:25:53.966 END TEST nvmf_fips 
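Both teardowns above (bdevperf pid 2494853, then the target pid 2494604) funnel through the harness's killprocess helper, and its trace shows one guard worth noting: the victim's comm name is read with ps, and anything resolving to sudo is refused. Approximately, as reconstructed from the @948-@972 entries (the real helper handles more platforms and options):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0      # already gone
    if [[ $(uname) == Linux ]]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name == sudo ]] && return 1    # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                   # reap, as in the '# wait 2494853' entry
  }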
00:25:53.966 ************************************ 00:25:54.226 23:30:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:54.226 23:30:03 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:54.226 23:30:03 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:54.226 23:30:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:54.226 23:30:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.226 23:30:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:54.226 ************************************ 00:25:54.226 START TEST nvmf_fuzz 00:25:54.226 ************************************ 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:54.226 * Looking for test storage... 00:25:54.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:54.226 23:30:03 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.226 23:30:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:59.500 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:59.500 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.500 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:59.501 Found net devices under 0000:86:00.0: cvl_0_0 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:59.501 Found net devices under 0000:86:00.1: cvl_0_1 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:25:59.501 00:25:59.501 --- 10.0.0.2 ping statistics --- 00:25:59.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.501 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
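The 'Found net devices under 0000:86:00.x' lines above come from a plain sysfs lookup: for a NIC bound to a kernel driver, the interface name is listed under the PCI device's net/ directory. The glob from the trace, wrapped into a loop over this host's two E810 functions:

  for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done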
00:25:59.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:59.501 00:25:59.501 --- 10.0.0.1 ping statistics --- 00:25:59.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.501 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.501 23:30:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2500715 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2500715 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2500715 ']' 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
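fabrics_fuzz.sh@16 above arms a trap so that an interrupted or crashed fuzz run still dumps shared memory and tears the target down; fabrics_fuzz.sh@36, further on, disarms it once the suite has passed. The pattern, with the handler copied from the trace (process_shm, killprocess and nvmftestfini are the harness's own helpers; the nvmf_tgt path is shortened):

  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $nvmfpid
  # ... fuzz passes run here ...
  trap - SIGINT SIGTERM EXIT    # success path: disarm before normal cleanup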
00:25:59.760 23:30:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.761 23:30:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 Malloc0 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:00.698 23:30:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:32.771 Fuzzing completed. 
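
Condensed, the fabrics_fuzz.sh run traced above amounts to: start nvmf_tgt inside the namespace, configure one malloc-backed subsystem over JSON-RPC (rpc_cmd in the trace wraps scripts/rpc.py), then fuzz it for 30 seconds with a fixed seed so any crash is reproducible. Paths assume an SPDK checkout; this is a sketch, not the full script.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # ...wait until /var/tmp/spdk.sock answers (waitforlisten)...
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512       # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Random pass: 30 s wall clock, seeded, against the TRID the target exposes.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second pass logged below swaps the timed random run for a replay of example.json (-j), which is why it finishes after only 16 admin commands.
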
Shutting down the fuzz application 00:26:32.771 00:26:32.771 Dumping successful admin opcodes: 00:26:32.771 8, 9, 10, 24, 00:26:32.771 Dumping successful io opcodes: 00:26:32.771 0, 9, 00:26:32.771 NS: 0x200003aefec0 I/O qp, Total commands completed: 659894, total successful commands: 3854, random_seed: 2516964288 00:26:32.771 NS: 0x200003aefec0 admin qp, Total commands completed: 77863, total successful commands: 601, random_seed: 2454512320 00:26:32.771 23:30:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:33.337 Fuzzing completed. Shutting down the fuzz application 00:26:33.337 00:26:33.337 Dumping successful admin opcodes: 00:26:33.337 24, 00:26:33.338 Dumping successful io opcodes: 00:26:33.338 00:26:33.338 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2186084530 00:26:33.338 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2186193338 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:33.338 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:33.338 rmmod nvme_tcp 00:26:33.338 rmmod nvme_fabrics 00:26:33.595 rmmod nvme_keyring 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2500715 ']' 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2500715 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2500715 ']' 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2500715 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2500715 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
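
The killprocess helper being traced at this point follows a guarded pattern: confirm the pid is non-empty and still alive, inspect its comm name (reactor_0 for an SPDK app), and only then signal and reap it. A simplified reconstruction, omitting the helper's sudo special case:

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                            # still running?
        [ "$(uname)" = Linux ] &&
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap our own child
    }
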
00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2500715' 00:26:33.595 killing process with pid 2500715 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2500715 00:26:33.595 23:30:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2500715 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.023 23:30:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.928 23:30:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:36.928 23:30:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:37.187 00:26:37.187 real 0m42.940s 00:26:37.187 user 0m58.082s 00:26:37.187 sys 0m15.324s 00:26:37.187 23:30:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.187 23:30:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:37.187 ************************************ 00:26:37.187 END TEST nvmf_fuzz 00:26:37.187 ************************************ 00:26:37.187 23:30:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:37.187 23:30:46 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:37.187 23:30:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:37.187 23:30:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.187 23:30:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.187 ************************************ 00:26:37.187 START TEST nvmf_multiconnection 00:26:37.187 ************************************ 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:37.187 * Looking for test storage... 
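
The fuzz-test teardown just logged unwinds everything in reverse: sync, unload the kernel NVMe-over-TCP modules, drop the test namespace (which hands cvl_0_0 back to the root namespace), and flush the initiator address. The namespace removal itself runs with xtrace disabled (_remove_spdk_ns 14> /dev/null), but it amounts to roughly:

    sync
    modprobe -v -r nvme-tcp           # retried in a loop in the trace above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # cvl_0_0 returns to the root namespace
    ip -4 addr flush cvl_0_1          # drop 10.0.0.1/24 from the initiator port
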
00:26:37.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.187 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.188 23:30:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.469 23:30:50 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:42.469 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:42.469 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:42.469 Found net devices under 0000:86:00.0: cvl_0_0 00:26:42.469 23:30:50 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:42.469 Found net devices under 0000:86:00.1: cvl_0_1 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:42.469 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
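
The device discovery traced above is plain sysfs walking: for each PCI function whose vendor:device pair matches a supported NIC (0x8086:0x159b here, an Intel E810), the kernel exposes the bound interface name under /sys/bus/pci/devices/<bdf>/net/, and the helper keeps only interfaces whose link state is up ([[ up == up ]] in the trace). A sketch with this machine's two functions hard-coded:

    net_devs=()
    for pci in 0000:86:00.0 0000:86:00.1; do     # pci_bus_cache["0x8086:0x159b"] in the real helper
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # basename only, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
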
00:26:42.470 23:30:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:42.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:26:42.470 00:26:42.470 --- 10.0.0.2 ping statistics --- 00:26:42.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.470 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:26:42.470 00:26:42.470 --- 10.0.0.1 ping statistics --- 00:26:42.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.470 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2509959 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2509959 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2509959 ']' 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
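
waitforlisten, invoked right after nvmf_tgt forks, simply polls until the target's RPC socket accepts a call, giving up after max_retries (100 above). A minimal stand-in, assuming the default socket path:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1    # target died during startup
            ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
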
00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:42.470 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.470 [2024-07-10 23:30:51.143227] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:26:42.470 [2024-07-10 23:30:51.143314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.470 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.470 [2024-07-10 23:30:51.252093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.470 [2024-07-10 23:30:51.463603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.470 [2024-07-10 23:30:51.463648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.470 [2024-07-10 23:30:51.463660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.470 [2024-07-10 23:30:51.463669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.470 [2024-07-10 23:30:51.463679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.470 [2024-07-10 23:30:51.463751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.470 [2024-07-10 23:30:51.463826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.470 [2024-07-10 23:30:51.463884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.470 [2024-07-10 23:30:51.463899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.039 [2024-07-10 23:30:51.970188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.039 23:30:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.039 Malloc1 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.039 [2024-07-10 23:30:52.095387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.039 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 Malloc2 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 Malloc3 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.299 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.300 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:43.300 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.300 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 Malloc4 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 Malloc5 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 Malloc6 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:43.585 23:30:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.585 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 Malloc7 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 Malloc8 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.846 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 Malloc9 00:26:44.105 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:44.105 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 23:30:53 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 Malloc10 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:44.105 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.106 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.365 Malloc11 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.365 23:30:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:45.317 23:30:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:45.317 23:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:45.317 23:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:45.317 23:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:45.317 23:30:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.852 23:30:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:48.788 23:30:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:48.788 23:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:48.788 23:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:48.788 23:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:48.788 23:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:50.692 23:30:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.692 23:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:51.629 23:31:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:51.629 23:31:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:51.629 23:31:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:51.629 23:31:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:51.629 23:31:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.165 23:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:55.100 23:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:55.100 23:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:55.100 23:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:55.100 23:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:55.100 23:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.002 23:31:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:58.379 23:31:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:58.379 23:31:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:58.379 23:31:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:58.379 23:31:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:58.379 23:31:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:00.364 23:31:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:00.364 23:31:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:00.365 23:31:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:00.365 23:31:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:00.365 23:31:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:00.365 23:31:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:00.365 23:31:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.365 23:31:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:01.744 23:31:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:01.744 23:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:01.744 23:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.744 23:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:01.744 23:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.652 23:31:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:05.031 23:31:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:05.031 23:31:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:05.031 23:31:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:05.031 23:31:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:05.031 23:31:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.935 23:31:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:08.313 23:31:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:08.313 23:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:08.313 23:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.313 23:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:08.313 23:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.217 23:31:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:12.117 23:31:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:12.117 23:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
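The connect loop driving the records above (multiconnection.sh @28-30) issues one nvme-cli connect per subsystem and then waits for the matching serial to appear. A sketch of the connect half, using the host NQN/ID seen in the trace:

    # Host-side connect loop sketch (multiconnection.sh @28-29).
    hostid=80aaeb9f-0274-ea11-906e-0017a4403562
    for i in $(seq 1 11); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid \
            --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode$i \
            -a 10.0.0.2 -s 4420
    done

Each successful connect surfaces a new /dev/nvmeXn1 namespace on the host, which is what the fio phases below target.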
00:27:12.117 23:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:12.117 23:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:12.117 23:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:14.020 23:31:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:15.397 23:31:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:15.397 23:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:15.397 23:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:15.397 23:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:15.397 23:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.300 23:31:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:18.676 23:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:18.676 23:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:18.676 23:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:18.676 23:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:18.676 23:31:27 nvmf_tcp.nvmf_multiconnection -- 
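waitforserial, whose xtrace dominates these records (autotest_common.sh @1198-1208), simply polls lsblk until exactly one block device reports the expected serial. An approximate reconstruction from the traced commands; the 15-iteration bound and 2 s sleep are taken directly from the trace:

    # Approximate reconstruction of waitforserial (autotest_common.sh @1198-1208).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2                                                     # give udev time to create the node
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }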
common/autotest_common.sh@1205 -- # sleep 2 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:20.581 23:31:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:20.581 [global] 00:27:20.581 thread=1 00:27:20.581 invalidate=1 00:27:20.581 rw=read 00:27:20.581 time_based=1 00:27:20.581 runtime=10 00:27:20.581 ioengine=libaio 00:27:20.581 direct=1 00:27:20.581 bs=262144 00:27:20.581 iodepth=64 00:27:20.581 norandommap=1 00:27:20.581 numjobs=1 00:27:20.581 00:27:20.581 [job0] 00:27:20.581 filename=/dev/nvme0n1 00:27:20.581 [job1] 00:27:20.581 filename=/dev/nvme10n1 00:27:20.840 [job2] 00:27:20.840 filename=/dev/nvme1n1 00:27:20.840 [job3] 00:27:20.840 filename=/dev/nvme2n1 00:27:20.840 [job4] 00:27:20.840 filename=/dev/nvme3n1 00:27:20.840 [job5] 00:27:20.840 filename=/dev/nvme4n1 00:27:20.840 [job6] 00:27:20.840 filename=/dev/nvme5n1 00:27:20.840 [job7] 00:27:20.840 filename=/dev/nvme6n1 00:27:20.840 [job8] 00:27:20.840 filename=/dev/nvme7n1 00:27:20.840 [job9] 00:27:20.840 filename=/dev/nvme8n1 00:27:20.840 [job10] 00:27:20.840 filename=/dev/nvme9n1 00:27:20.840 Could not set queue depth (nvme0n1) 00:27:20.840 Could not set queue depth (nvme10n1) 00:27:20.840 Could not set queue depth (nvme1n1) 00:27:20.840 Could not set queue depth (nvme2n1) 00:27:20.840 Could not set queue depth (nvme3n1) 00:27:20.840 Could not set queue depth (nvme4n1) 00:27:20.840 Could not set queue depth (nvme5n1) 00:27:20.840 Could not set queue depth (nvme6n1) 00:27:20.840 Could not set queue depth (nvme7n1) 00:27:20.840 Could not set queue depth (nvme8n1) 00:27:20.840 Could not set queue depth (nvme9n1) 00:27:21.098 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
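fio-wrapper is SPDK's helper script around fio; its flags map one-to-one onto the [global] section it just dumped (-i 262144 becomes bs, -d 64 becomes iodepth, -t read becomes rw, -r 10 becomes runtime), with one [jobN] stanza per connected namespace. The "Could not set queue depth" lines are warnings printed while the devices are being prepared; the run proceeds regardless, as the "Starting 11 threads" line below confirms. A rough standalone equivalent for a single device (hypothetical command line, not what the wrapper literally execs):

    # Hypothetical single-device equivalent of one fio-wrapper job,
    # assembled from the [global] section dumped above.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
        --time_based=1 --runtime=10 --norandommap=1 --numjobs=1 \
        --invalidate=1 --thread=1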
256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:21.098 fio-3.35 00:27:21.098 Starting 11 threads 00:27:33.394 00:27:33.394 job0: (groupid=0, jobs=1): err= 0: pid=2516630: Wed Jul 10 23:31:40 2024 00:27:33.394 read: IOPS=681, BW=170MiB/s (179MB/s)(1712MiB/10052msec) 00:27:33.394 slat (usec): min=12, max=109379, avg=948.14, stdev=4311.11 00:27:33.394 clat (usec): min=1753, max=242442, avg=92889.94, stdev=46419.42 00:27:33.394 lat (usec): min=1780, max=294561, avg=93838.08, stdev=47151.48 00:27:33.394 clat percentiles (msec): 00:27:33.394 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 53], 00:27:33.394 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 100], 00:27:33.394 | 70.00th=[ 112], 80.00th=[ 132], 90.00th=[ 161], 95.00th=[ 182], 00:27:33.394 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 222], 99.95th=[ 228], 00:27:33.394 | 99.99th=[ 243] 00:27:33.394 bw ( KiB/s): min=79360, max=263168, per=8.41%, avg=173721.60, stdev=48158.41, samples=20 00:27:33.394 iops : min= 310, max= 1028, avg=678.60, stdev=188.12, samples=20 00:27:33.394 lat (msec) : 2=0.01%, 4=0.39%, 10=1.46%, 20=3.43%, 50=13.23% 00:27:33.394 lat (msec) : 100=41.68%, 250=39.79% 00:27:33.394 cpu : usr=0.28%, sys=2.62%, ctx=1655, majf=0, minf=4097 00:27:33.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:33.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.394 issued rwts: total=6849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.394 job1: (groupid=0, jobs=1): err= 0: pid=2516631: Wed Jul 10 23:31:40 2024 00:27:33.394 read: IOPS=603, BW=151MiB/s (158MB/s)(1529MiB/10131msec) 00:27:33.394 slat (usec): min=11, max=152599, avg=1301.11, stdev=5045.44 00:27:33.394 clat (usec): min=1294, max=338829, avg=104625.73, stdev=49600.86 00:27:33.394 lat (usec): min=1330, max=338874, avg=105926.84, stdev=50252.88 00:27:33.394 clat percentiles (msec): 00:27:33.394 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 64], 00:27:33.394 | 30.00th=[ 84], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 116], 00:27:33.394 | 70.00th=[ 127], 80.00th=[ 142], 90.00th=[ 169], 95.00th=[ 188], 00:27:33.394 | 99.00th=[ 232], 99.50th=[ 251], 99.90th=[ 317], 99.95th=[ 330], 00:27:33.394 | 99.99th=[ 338] 00:27:33.394 bw ( KiB/s): min=90112, max=235008, per=7.50%, avg=154905.60, stdev=41093.33, samples=20 00:27:33.394 iops : min= 352, max= 918, avg=605.10, stdev=160.52, samples=20 00:27:33.394 lat (msec) : 2=0.08%, 4=0.20%, 10=2.24%, 20=4.01%, 50=6.84% 00:27:33.394 lat (msec) : 100=31.06%, 250=55.04%, 500=0.54% 00:27:33.394 cpu : usr=0.31%, sys=2.39%, ctx=1443, majf=0, minf=3347 00:27:33.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:33.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.394 issued rwts: total=6114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.395 job2: (groupid=0, jobs=1): err= 0: pid=2516633: Wed Jul 10 23:31:40 2024 00:27:33.395 read: IOPS=884, BW=221MiB/s (232MB/s)(2231MiB/10089msec) 00:27:33.395 slat (usec): min=9, max=124536, avg=641.05, stdev=3287.22 00:27:33.395 clat 
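A quick consistency check on the completed jobs above: job0 reports an average bandwidth of 173721.60 KiB/s, and dividing by the 256 KiB block size gives the reported 678.6 IOPS (173721.60 / 256 = 678.6), i.e. roughly 170 MiB/s, matching the headline read rate. The same identity (bw = iops x bs) holds for every job in both phases.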
(usec): min=734, max=284251, avg=71591.66, stdev=44037.09 00:27:33.395 lat (usec): min=778, max=318660, avg=72232.72, stdev=44413.64 00:27:33.395 clat percentiles (msec): 00:27:33.395 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 18], 20.00th=[ 31], 00:27:33.395 | 30.00th=[ 45], 40.00th=[ 57], 50.00th=[ 66], 60.00th=[ 78], 00:27:33.395 | 70.00th=[ 93], 80.00th=[ 106], 90.00th=[ 129], 95.00th=[ 153], 00:27:33.395 | 99.00th=[ 199], 99.50th=[ 211], 99.90th=[ 234], 99.95th=[ 262], 00:27:33.395 | 99.99th=[ 284] 00:27:33.395 bw ( KiB/s): min=140288, max=391680, per=10.98%, avg=226841.60, stdev=69121.99, samples=20 00:27:33.395 iops : min= 548, max= 1530, avg=886.10, stdev=270.01, samples=20 00:27:33.395 lat (usec) : 750=0.01%, 1000=0.04% 00:27:33.395 lat (msec) : 2=0.18%, 4=0.52%, 10=2.99%, 20=8.14%, 50=22.34% 00:27:33.395 lat (msec) : 100=41.73%, 250=23.97%, 500=0.08% 00:27:33.395 cpu : usr=0.29%, sys=2.90%, ctx=2018, majf=0, minf=4097 00:27:33.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:33.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.395 issued rwts: total=8924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.395 job3: (groupid=0, jobs=1): err= 0: pid=2516634: Wed Jul 10 23:31:40 2024 00:27:33.395 read: IOPS=667, BW=167MiB/s (175MB/s)(1691MiB/10128msec) 00:27:33.395 slat (usec): min=10, max=86683, avg=968.34, stdev=3898.55 00:27:33.395 clat (usec): min=761, max=303081, avg=94777.47, stdev=50143.87 00:27:33.395 lat (usec): min=790, max=330860, avg=95745.81, stdev=50784.18 00:27:33.395 clat percentiles (msec): 00:27:33.395 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 27], 20.00th=[ 50], 00:27:33.395 | 30.00th=[ 63], 40.00th=[ 81], 50.00th=[ 95], 60.00th=[ 109], 00:27:33.395 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 159], 95.00th=[ 182], 00:27:33.395 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 296], 99.95th=[ 296], 00:27:33.395 | 99.99th=[ 305] 00:27:33.395 bw ( KiB/s): min=89088, max=311296, per=8.30%, avg=171494.40, stdev=54487.78, samples=20 00:27:33.395 iops : min= 348, max= 1216, avg=669.90, stdev=212.84, samples=20 00:27:33.395 lat (usec) : 1000=0.12% 00:27:33.395 lat (msec) : 2=0.50%, 4=0.74%, 10=3.45%, 20=3.18%, 50=12.19% 00:27:33.395 lat (msec) : 100=33.60%, 250=46.07%, 500=0.16% 00:27:33.395 cpu : usr=0.23%, sys=2.48%, ctx=1725, majf=0, minf=4097 00:27:33.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:33.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.395 issued rwts: total=6762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.395 job4: (groupid=0, jobs=1): err= 0: pid=2516635: Wed Jul 10 23:31:40 2024 00:27:33.395 read: IOPS=847, BW=212MiB/s (222MB/s)(2123MiB/10013msec) 00:27:33.395 slat (usec): min=10, max=155483, avg=924.80, stdev=3549.19 00:27:33.395 clat (usec): min=952, max=263699, avg=74454.06, stdev=35021.22 00:27:33.395 lat (usec): min=980, max=345630, avg=75378.86, stdev=35373.74 00:27:33.395 clat percentiles (msec): 00:27:33.395 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 48], 00:27:33.395 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 79], 00:27:33.395 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 115], 95.00th=[ 
132], 00:27:33.395 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 249], 99.95th=[ 249], 00:27:33.395 | 99.99th=[ 264] 00:27:33.395 bw ( KiB/s): min=121856, max=305152, per=10.45%, avg=215751.40, stdev=47965.75, samples=20 00:27:33.395 iops : min= 476, max= 1192, avg=842.75, stdev=187.37, samples=20 00:27:33.395 lat (usec) : 1000=0.01% 00:27:33.395 lat (msec) : 2=0.06%, 4=0.11%, 10=1.17%, 20=2.27%, 50=18.39% 00:27:33.395 lat (msec) : 100=59.10%, 250=18.86%, 500=0.04% 00:27:33.395 cpu : usr=0.46%, sys=3.07%, ctx=1896, majf=0, minf=4097 00:27:33.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:33.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.395 issued rwts: total=8490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.395 job5: (groupid=0, jobs=1): err= 0: pid=2516636: Wed Jul 10 23:31:40 2024 00:27:33.395 read: IOPS=778, BW=195MiB/s (204MB/s)(1956MiB/10050msec) 00:27:33.395 slat (usec): min=13, max=154870, avg=700.45, stdev=4011.34 00:27:33.395 clat (usec): min=1896, max=277178, avg=81403.62, stdev=48305.12 00:27:33.395 lat (usec): min=1915, max=349910, avg=82104.07, stdev=48880.66 00:27:33.395 clat percentiles (msec): 00:27:33.395 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 23], 20.00th=[ 36], 00:27:33.395 | 30.00th=[ 52], 40.00th=[ 64], 50.00th=[ 79], 60.00th=[ 91], 00:27:33.395 | 70.00th=[ 104], 80.00th=[ 117], 90.00th=[ 150], 95.00th=[ 171], 00:27:33.395 | 99.00th=[ 213], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 279], 00:27:33.395 | 99.99th=[ 279] 00:27:33.395 bw ( KiB/s): min=107520, max=274432, per=9.62%, avg=198707.20, stdev=51295.01, samples=20 00:27:33.395 iops : min= 420, max= 1072, avg=776.20, stdev=200.37, samples=20 00:27:33.395 lat (msec) : 2=0.10%, 4=0.33%, 10=2.86%, 20=5.21%, 50=20.78% 00:27:33.395 lat (msec) : 100=38.15%, 250=32.05%, 500=0.51% 00:27:33.395 cpu : usr=0.28%, sys=2.93%, ctx=1904, majf=0, minf=4097 00:27:33.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:33.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.395 issued rwts: total=7825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.395 job6: (groupid=0, jobs=1): err= 0: pid=2516637: Wed Jul 10 23:31:40 2024 00:27:33.395 read: IOPS=772, BW=193MiB/s (203MB/s)(1958MiB/10130msec) 00:27:33.395 slat (usec): min=9, max=132635, avg=914.69, stdev=4470.22 00:27:33.395 clat (usec): min=1083, max=324089, avg=81787.47, stdev=45198.91 00:27:33.395 lat (usec): min=1117, max=381757, avg=82702.16, stdev=45781.53 00:27:33.395 clat percentiles (msec): 00:27:33.395 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 48], 00:27:33.395 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 84], 00:27:33.395 | 70.00th=[ 97], 80.00th=[ 113], 90.00th=[ 142], 95.00th=[ 182], 00:27:33.395 | 99.00th=[ 230], 99.50th=[ 241], 99.90th=[ 284], 99.95th=[ 284], 00:27:33.395 | 99.99th=[ 326] 00:27:33.395 bw ( KiB/s): min=90624, max=379904, per=9.63%, avg=198809.60, stdev=70872.44, samples=20 00:27:33.395 iops : min= 354, max= 1484, avg=776.60, stdev=276.85, samples=20 00:27:33.395 lat (msec) : 2=0.01%, 4=0.17%, 10=0.82%, 20=0.72%, 50=21.48% 00:27:33.395 lat (msec) : 100=48.79%, 250=27.82%, 500=0.20% 
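In these per-job blocks, the "IO depths" line is fio's histogram of queue depth over the run: >=64 at roughly 99% means the job sat at its configured iodepth of 64 nearly the whole time, so the namespaces stayed saturated. The "lat (msec)" buckets likewise give the fraction of I/Os that completed within each latency range, and "submit"/"complete" show the batch sizes used for submission and reaping.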
00:27:33.395 cpu : usr=0.38%, sys=2.86%, ctx=1801, majf=0, minf=4097 00:27:33.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:33.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.395 issued rwts: total=7830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.395 job7: (groupid=0, jobs=1): err= 0: pid=2516638: Wed Jul 10 23:31:40 2024 00:27:33.395 read: IOPS=848, BW=212MiB/s (222MB/s)(2147MiB/10123msec) 00:27:33.395 slat (usec): min=10, max=120189, avg=901.20, stdev=3251.25 00:27:33.395 clat (usec): min=1452, max=330419, avg=74443.83, stdev=37737.66 00:27:33.395 lat (usec): min=1497, max=330461, avg=75345.03, stdev=38058.97 00:27:33.396 clat percentiles (msec): 00:27:33.396 | 1.00th=[ 5], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 46], 00:27:33.396 | 30.00th=[ 56], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 81], 00:27:33.396 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 136], 00:27:33.396 | 99.00th=[ 207], 99.50th=[ 220], 99.90th=[ 300], 99.95th=[ 313], 00:27:33.396 | 99.99th=[ 330] 00:27:33.396 bw ( KiB/s): min=117760, max=348672, per=10.57%, avg=218188.80, stdev=70472.65, samples=20 00:27:33.396 iops : min= 460, max= 1362, avg=852.30, stdev=275.28, samples=20 00:27:33.396 lat (msec) : 2=0.01%, 4=0.70%, 10=0.93%, 20=3.10%, 50=19.24% 00:27:33.396 lat (msec) : 100=60.11%, 250=15.55%, 500=0.36% 00:27:33.396 cpu : usr=0.33%, sys=3.19%, ctx=1866, majf=0, minf=4097 00:27:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.396 issued rwts: total=8586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.396 job8: (groupid=0, jobs=1): err= 0: pid=2516639: Wed Jul 10 23:31:40 2024 00:27:33.396 read: IOPS=708, BW=177MiB/s (186MB/s)(1794MiB/10132msec) 00:27:33.396 slat (usec): min=8, max=55284, avg=1067.69, stdev=3717.24 00:27:33.396 clat (msec): min=3, max=322, avg=89.20, stdev=50.37 00:27:33.396 lat (msec): min=3, max=322, avg=90.27, stdev=50.94 00:27:33.396 clat percentiles (msec): 00:27:33.396 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 43], 00:27:33.396 | 30.00th=[ 62], 40.00th=[ 75], 50.00th=[ 87], 60.00th=[ 95], 00:27:33.396 | 70.00th=[ 108], 80.00th=[ 124], 90.00th=[ 163], 95.00th=[ 190], 00:27:33.396 | 99.00th=[ 220], 99.50th=[ 234], 99.90th=[ 313], 99.95th=[ 313], 00:27:33.396 | 99.99th=[ 321] 00:27:33.396 bw ( KiB/s): min=78336, max=434176, per=8.82%, avg=182059.55, stdev=89806.92, samples=20 00:27:33.396 iops : min= 306, max= 1696, avg=711.15, stdev=350.81, samples=20 00:27:33.396 lat (msec) : 4=0.01%, 10=0.92%, 20=4.42%, 50=17.48%, 100=41.30% 00:27:33.396 lat (msec) : 250=35.48%, 500=0.39% 00:27:33.396 cpu : usr=0.19%, sys=2.54%, ctx=1673, majf=0, minf=4097 00:27:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.396 issued rwts: total=7174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.396 job9: (groupid=0, 
jobs=1): err= 0: pid=2516640: Wed Jul 10 23:31:40 2024 00:27:33.396 read: IOPS=726, BW=182MiB/s (190MB/s)(1840MiB/10131msec) 00:27:33.396 slat (usec): min=8, max=71355, avg=1094.94, stdev=4016.03 00:27:33.396 clat (usec): min=1260, max=338977, avg=86901.11, stdev=50394.57 00:27:33.396 lat (usec): min=1306, max=339015, avg=87996.05, stdev=51186.69 00:27:33.396 clat percentiles (msec): 00:27:33.396 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 25], 20.00th=[ 40], 00:27:33.396 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 82], 60.00th=[ 94], 00:27:33.396 | 70.00th=[ 112], 80.00th=[ 129], 90.00th=[ 159], 95.00th=[ 186], 00:27:33.396 | 99.00th=[ 205], 99.50th=[ 215], 99.90th=[ 264], 99.95th=[ 264], 00:27:33.396 | 99.99th=[ 338] 00:27:33.396 bw ( KiB/s): min=101888, max=287232, per=9.04%, avg=186752.00, stdev=61445.36, samples=20 00:27:33.396 iops : min= 398, max= 1122, avg=729.50, stdev=240.02, samples=20 00:27:33.396 lat (msec) : 2=0.12%, 4=0.68%, 10=3.41%, 20=3.52%, 50=16.89% 00:27:33.396 lat (msec) : 100=38.61%, 250=36.54%, 500=0.23% 00:27:33.396 cpu : usr=0.29%, sys=2.70%, ctx=1752, majf=0, minf=4097 00:27:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.396 issued rwts: total=7359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.396 job10: (groupid=0, jobs=1): err= 0: pid=2516641: Wed Jul 10 23:31:40 2024 00:27:33.396 read: IOPS=578, BW=145MiB/s (152MB/s)(1455MiB/10054msec) 00:27:33.396 slat (usec): min=11, max=120886, avg=1631.14, stdev=5789.90 00:27:33.396 clat (msec): min=3, max=312, avg=108.79, stdev=39.27 00:27:33.396 lat (msec): min=3, max=312, avg=110.42, stdev=40.04 00:27:33.396 clat percentiles (msec): 00:27:33.396 | 1.00th=[ 23], 5.00th=[ 60], 10.00th=[ 68], 20.00th=[ 80], 00:27:33.396 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 102], 60.00th=[ 110], 00:27:33.396 | 70.00th=[ 118], 80.00th=[ 138], 90.00th=[ 169], 95.00th=[ 190], 00:27:33.396 | 99.00th=[ 209], 99.50th=[ 220], 99.90th=[ 309], 99.95th=[ 313], 00:27:33.396 | 99.99th=[ 313] 00:27:33.396 bw ( KiB/s): min=64512, max=231936, per=7.14%, avg=147353.60, stdev=40938.30, samples=20 00:27:33.396 iops : min= 252, max= 906, avg=575.60, stdev=159.92, samples=20 00:27:33.396 lat (msec) : 4=0.15%, 10=0.26%, 20=0.38%, 50=1.08%, 100=46.72% 00:27:33.396 lat (msec) : 250=51.24%, 500=0.17% 00:27:33.396 cpu : usr=0.31%, sys=2.33%, ctx=1212, majf=0, minf=4097 00:27:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.396 issued rwts: total=5820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.396 00:27:33.396 Run status group 0 (all jobs): 00:27:33.396 READ: bw=2017MiB/s (2115MB/s), 145MiB/s-221MiB/s (152MB/s-232MB/s), io=20.0GiB (21.4GB), run=10013-10132msec 00:27:33.396 00:27:33.396 Disk stats (read/write): 00:27:33.396 nvme0n1: ios=13494/0, merge=0/0, ticks=1242716/0, in_queue=1242716, util=97.29% 00:27:33.396 nvme10n1: ios=12088/0, merge=0/0, ticks=1229142/0, in_queue=1229142, util=97.49% 00:27:33.396 nvme1n1: ios=17467/0, merge=0/0, ticks=1240217/0, in_queue=1240217, util=97.69% 00:27:33.396 nvme2n1: 
ios=13327/0, merge=0/0, ticks=1229929/0, in_queue=1229929, util=97.88% 00:27:33.396 nvme3n1: ios=16601/0, merge=0/0, ticks=1242032/0, in_queue=1242032, util=97.96% 00:27:33.396 nvme4n1: ios=15434/0, merge=0/0, ticks=1242230/0, in_queue=1242230, util=98.32% 00:27:33.396 nvme5n1: ios=15526/0, merge=0/0, ticks=1238147/0, in_queue=1238147, util=98.43% 00:27:33.396 nvme6n1: ios=16997/0, merge=0/0, ticks=1230024/0, in_queue=1230024, util=98.55% 00:27:33.396 nvme7n1: ios=14200/0, merge=0/0, ticks=1230376/0, in_queue=1230376, util=98.94% 00:27:33.396 nvme8n1: ios=14527/0, merge=0/0, ticks=1225807/0, in_queue=1225807, util=99.11% 00:27:33.396 nvme9n1: ios=11420/0, merge=0/0, ticks=1230197/0, in_queue=1230197, util=99.24% 00:27:33.396 23:31:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:33.396 [global] 00:27:33.396 thread=1 00:27:33.396 invalidate=1 00:27:33.396 rw=randwrite 00:27:33.396 time_based=1 00:27:33.396 runtime=10 00:27:33.396 ioengine=libaio 00:27:33.396 direct=1 00:27:33.396 bs=262144 00:27:33.396 iodepth=64 00:27:33.396 norandommap=1 00:27:33.396 numjobs=1 00:27:33.396 00:27:33.396 [job0] 00:27:33.396 filename=/dev/nvme0n1 00:27:33.396 [job1] 00:27:33.396 filename=/dev/nvme10n1 00:27:33.396 [job2] 00:27:33.396 filename=/dev/nvme1n1 00:27:33.396 [job3] 00:27:33.396 filename=/dev/nvme2n1 00:27:33.396 [job4] 00:27:33.396 filename=/dev/nvme3n1 00:27:33.396 [job5] 00:27:33.396 filename=/dev/nvme4n1 00:27:33.396 [job6] 00:27:33.396 filename=/dev/nvme5n1 00:27:33.396 [job7] 00:27:33.396 filename=/dev/nvme6n1 00:27:33.396 [job8] 00:27:33.396 filename=/dev/nvme7n1 00:27:33.396 [job9] 00:27:33.396 filename=/dev/nvme8n1 00:27:33.396 [job10] 00:27:33.396 filename=/dev/nvme9n1 00:27:33.396 Could not set queue depth (nvme0n1) 00:27:33.396 Could not set queue depth (nvme10n1) 00:27:33.396 Could not set queue depth (nvme1n1) 00:27:33.397 Could not set queue depth (nvme2n1) 00:27:33.397 Could not set queue depth (nvme3n1) 00:27:33.397 Could not set queue depth (nvme4n1) 00:27:33.397 Could not set queue depth (nvme5n1) 00:27:33.397 Could not set queue depth (nvme6n1) 00:27:33.397 Could not set queue depth (nvme7n1) 00:27:33.397 Could not set queue depth (nvme8n1) 00:27:33.397 Could not set queue depth (nvme9n1) 00:27:33.397 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job9: 
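Closing out the read phase above: the aggregate line reports READ: bw=2017MiB/s, io=20.0GiB across runs of 10013-10132 msec. Summing the eleven per-job averages (170+151+221+167+212+195+193+212+177+182+145 MiB/s) gives about 2025 MiB/s, consistent with the aggregate once per-job rounding is accounted for. The randwrite phase, whose job file is dumped above, repeats the same layout with rw=randwrite.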
(g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.397 fio-3.35 00:27:33.397 Starting 11 threads 00:27:43.376 00:27:43.376 job0: (groupid=0, jobs=1): err= 0: pid=2518170: Wed Jul 10 23:31:51 2024 00:27:43.376 write: IOPS=461, BW=115MiB/s (121MB/s)(1172MiB/10165msec); 0 zone resets 00:27:43.376 slat (usec): min=28, max=132671, avg=1734.93, stdev=5264.31 00:27:43.376 clat (msec): min=3, max=380, avg=136.93, stdev=78.09 00:27:43.376 lat (msec): min=3, max=380, avg=138.66, stdev=79.23 00:27:43.376 clat percentiles (msec): 00:27:43.376 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 46], 20.00th=[ 64], 00:27:43.376 | 30.00th=[ 85], 40.00th=[ 107], 50.00th=[ 136], 60.00th=[ 150], 00:27:43.376 | 70.00th=[ 169], 80.00th=[ 197], 90.00th=[ 266], 95.00th=[ 288], 00:27:43.376 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 372], 99.95th=[ 372], 00:27:43.376 | 99.99th=[ 380] 00:27:43.376 bw ( KiB/s): min=49152, max=233472, per=8.08%, avg=118374.40, stdev=53112.47, samples=20 00:27:43.376 iops : min= 192, max= 912, avg=462.40, stdev=207.47, samples=20 00:27:43.376 lat (msec) : 4=0.11%, 10=1.02%, 20=2.47%, 50=8.41%, 100=25.20% 00:27:43.376 lat (msec) : 250=51.38%, 500=11.41% 00:27:43.376 cpu : usr=0.94%, sys=1.55%, ctx=2311, majf=0, minf=1 00:27:43.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:43.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.376 issued rwts: total=0,4687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.376 job1: (groupid=0, jobs=1): err= 0: pid=2518182: Wed Jul 10 23:31:51 2024 00:27:43.376 write: IOPS=483, BW=121MiB/s (127MB/s)(1228MiB/10164msec); 0 zone resets 00:27:43.376 slat (usec): min=20, max=72201, avg=1644.31, stdev=4599.11 00:27:43.376 clat (msec): min=2, max=336, avg=130.71, stdev=77.14 00:27:43.376 lat (msec): min=2, max=336, avg=132.35, stdev=78.09 00:27:43.376 clat percentiles (msec): 00:27:43.376 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 49], 20.00th=[ 53], 00:27:43.376 | 30.00th=[ 75], 40.00th=[ 87], 50.00th=[ 113], 60.00th=[ 159], 00:27:43.376 | 70.00th=[ 178], 80.00th=[ 203], 90.00th=[ 253], 95.00th=[ 271], 00:27:43.376 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 330], 99.95th=[ 334], 00:27:43.376 | 99.99th=[ 338] 00:27:43.376 bw ( KiB/s): min=59392, max=250368, per=8.47%, avg=124093.50, stdev=57284.96, samples=20 00:27:43.376 iops : min= 232, max= 978, avg=484.70, stdev=223.75, samples=20 00:27:43.376 lat (msec) : 4=0.14%, 10=1.43%, 20=2.28%, 50=8.41%, 100=35.14% 00:27:43.376 lat (msec) : 250=42.43%, 500=10.18% 00:27:43.376 cpu : usr=1.00%, sys=1.63%, ctx=2195, majf=0, minf=1 00:27:43.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:43.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.376 issued rwts: total=0,4912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.376 job2: (groupid=0, jobs=1): err= 0: pid=2518189: Wed Jul 10 23:31:51 2024 00:27:43.376 write: IOPS=645, BW=161MiB/s (169MB/s)(1643MiB/10175msec); 0 zone resets 00:27:43.376 slat (usec): 
min=25, max=462822, avg=1408.48, stdev=6665.49 00:27:43.376 clat (msec): min=3, max=519, avg=97.60, stdev=59.51 00:27:43.376 lat (msec): min=3, max=523, avg=99.01, stdev=60.02 00:27:43.376 clat percentiles (msec): 00:27:43.376 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 54], 00:27:43.376 | 30.00th=[ 63], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 91], 00:27:43.376 | 70.00th=[ 112], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 167], 00:27:43.376 | 99.00th=[ 380], 99.50th=[ 493], 99.90th=[ 518], 99.95th=[ 518], 00:27:43.376 | 99.99th=[ 518] 00:27:43.376 bw ( KiB/s): min=82432, max=294912, per=11.37%, avg=166630.40, stdev=57392.45, samples=20 00:27:43.376 iops : min= 322, max= 1152, avg=650.90, stdev=224.19, samples=20 00:27:43.376 lat (msec) : 4=0.12%, 10=0.43%, 20=1.69%, 50=8.90%, 100=55.25% 00:27:43.376 lat (msec) : 250=32.01%, 500=1.23%, 750=0.37% 00:27:43.376 cpu : usr=1.61%, sys=2.03%, ctx=1993, majf=0, minf=1 00:27:43.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:43.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.376 issued rwts: total=0,6572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.376 job3: (groupid=0, jobs=1): err= 0: pid=2518190: Wed Jul 10 23:31:51 2024 00:27:43.376 write: IOPS=412, BW=103MiB/s (108MB/s)(1045MiB/10142msec); 0 zone resets 00:27:43.376 slat (usec): min=27, max=58470, avg=2218.43, stdev=4981.99 00:27:43.377 clat (msec): min=4, max=317, avg=152.91, stdev=68.45 00:27:43.377 lat (msec): min=4, max=317, avg=155.13, stdev=69.47 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 83], 20.00th=[ 89], 00:27:43.377 | 30.00th=[ 111], 40.00th=[ 140], 50.00th=[ 150], 60.00th=[ 161], 00:27:43.377 | 70.00th=[ 174], 80.00th=[ 209], 90.00th=[ 268], 95.00th=[ 284], 00:27:43.377 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 317], 99.95th=[ 317], 00:27:43.377 | 99.99th=[ 317] 00:27:43.377 bw ( KiB/s): min=53248, max=199680, per=7.19%, avg=105395.20, stdev=45020.24, samples=20 00:27:43.377 iops : min= 208, max= 780, avg=411.70, stdev=175.86, samples=20 00:27:43.377 lat (msec) : 10=0.72%, 20=1.03%, 50=4.33%, 100=21.48%, 250=59.81% 00:27:43.377 lat (msec) : 500=12.63% 00:27:43.377 cpu : usr=1.07%, sys=1.44%, ctx=1538, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,4180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job4: (groupid=0, jobs=1): err= 0: pid=2518191: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=528, BW=132MiB/s (139MB/s)(1341MiB/10146msec); 0 zone resets 00:27:43.377 slat (usec): min=18, max=79361, avg=1631.30, stdev=4061.45 00:27:43.377 clat (msec): min=2, max=309, avg=119.40, stdev=68.74 00:27:43.377 lat (msec): min=2, max=309, avg=121.03, stdev=69.65 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 11], 5.00th=[ 40], 10.00th=[ 50], 20.00th=[ 82], 00:27:43.377 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 103], 00:27:43.377 | 70.00th=[ 144], 80.00th=[ 171], 90.00th=[ 245], 95.00th=[ 275], 00:27:43.377 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 
309], 99.95th=[ 309], 00:27:43.377 | 99.99th=[ 309] 00:27:43.377 bw ( KiB/s): min=57344, max=222720, per=9.26%, avg=135665.25, stdev=56450.94, samples=20 00:27:43.377 iops : min= 224, max= 870, avg=529.90, stdev=220.53, samples=20 00:27:43.377 lat (msec) : 4=0.11%, 10=0.80%, 20=1.45%, 50=9.01%, 100=47.60% 00:27:43.377 lat (msec) : 250=32.05%, 500=8.97% 00:27:43.377 cpu : usr=1.28%, sys=1.63%, ctx=2123, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,5363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job5: (groupid=0, jobs=1): err= 0: pid=2518192: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=616, BW=154MiB/s (162MB/s)(1565MiB/10149msec); 0 zone resets 00:27:43.377 slat (usec): min=18, max=33389, avg=1527.79, stdev=2966.14 00:27:43.377 clat (msec): min=5, max=300, avg=102.21, stdev=38.83 00:27:43.377 lat (msec): min=5, max=300, avg=103.73, stdev=39.29 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 33], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 71], 00:27:43.377 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 95], 00:27:43.377 | 70.00th=[ 122], 80.00th=[ 142], 90.00th=[ 157], 95.00th=[ 163], 00:27:43.377 | 99.00th=[ 199], 99.50th=[ 220], 99.90th=[ 279], 99.95th=[ 292], 00:27:43.377 | 99.99th=[ 300] 00:27:43.377 bw ( KiB/s): min=92160, max=268800, per=10.82%, avg=158592.00, stdev=48960.15, samples=20 00:27:43.377 iops : min= 360, max= 1050, avg=619.50, stdev=191.25, samples=20 00:27:43.377 lat (msec) : 10=0.11%, 20=0.35%, 50=3.85%, 100=57.84%, 250=37.56% 00:27:43.377 lat (msec) : 500=0.29% 00:27:43.377 cpu : usr=1.36%, sys=1.53%, ctx=1897, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,6259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job6: (groupid=0, jobs=1): err= 0: pid=2518193: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=615, BW=154MiB/s (161MB/s)(1564MiB/10161msec); 0 zone resets 00:27:43.377 slat (usec): min=26, max=50446, avg=1467.68, stdev=2904.87 00:27:43.377 clat (msec): min=3, max=348, avg=102.33, stdev=36.72 00:27:43.377 lat (msec): min=4, max=348, avg=103.80, stdev=37.10 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 18], 5.00th=[ 59], 10.00th=[ 82], 20.00th=[ 86], 00:27:43.377 | 30.00th=[ 88], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 94], 00:27:43.377 | 70.00th=[ 103], 80.00th=[ 124], 90.00th=[ 153], 95.00th=[ 182], 00:27:43.377 | 99.00th=[ 209], 99.50th=[ 251], 99.90th=[ 338], 99.95th=[ 342], 00:27:43.377 | 99.99th=[ 351] 00:27:43.377 bw ( KiB/s): min=92160, max=212480, per=10.82%, avg=158500.10, stdev=36860.13, samples=20 00:27:43.377 iops : min= 360, max= 830, avg=619.10, stdev=144.05, samples=20 00:27:43.377 lat (msec) : 4=0.02%, 10=0.34%, 20=0.91%, 50=3.04%, 100=64.49% 00:27:43.377 lat (msec) : 250=30.73%, 500=0.48% 00:27:43.377 cpu : usr=1.54%, sys=1.99%, ctx=2068, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 
00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,6254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job7: (groupid=0, jobs=1): err= 0: pid=2518194: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=432, BW=108MiB/s (113MB/s)(1092MiB/10092msec); 0 zone resets 00:27:43.377 slat (usec): min=23, max=68942, avg=1800.52, stdev=4597.88 00:27:43.377 clat (msec): min=2, max=302, avg=145.90, stdev=66.90 00:27:43.377 lat (msec): min=4, max=302, avg=147.70, stdev=67.90 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 66], 20.00th=[ 103], 00:27:43.377 | 30.00th=[ 117], 40.00th=[ 125], 50.00th=[ 136], 60.00th=[ 148], 00:27:43.377 | 70.00th=[ 163], 80.00th=[ 207], 90.00th=[ 253], 95.00th=[ 271], 00:27:43.377 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:27:43.377 | 99.99th=[ 305] 00:27:43.377 bw ( KiB/s): min=57344, max=168448, per=7.52%, avg=110244.55, stdev=35773.58, samples=20 00:27:43.377 iops : min= 224, max= 658, avg=430.60, stdev=139.74, samples=20 00:27:43.377 lat (msec) : 4=0.07%, 10=1.33%, 20=2.33%, 50=3.69%, 100=11.97% 00:27:43.377 lat (msec) : 250=70.20%, 500=10.41% 00:27:43.377 cpu : usr=1.00%, sys=1.29%, ctx=2211, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,4369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job8: (groupid=0, jobs=1): err= 0: pid=2518197: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=451, BW=113MiB/s (118MB/s)(1140MiB/10091msec); 0 zone resets 00:27:43.377 slat (usec): min=21, max=114333, avg=1867.50, stdev=5044.21 00:27:43.377 clat (usec): min=1818, max=362428, avg=139695.16, stdev=76896.81 00:27:43.377 lat (usec): min=1872, max=362494, avg=141562.66, stdev=77878.65 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 87], 00:27:43.377 | 30.00th=[ 92], 40.00th=[ 114], 50.00th=[ 133], 60.00th=[ 150], 00:27:43.377 | 70.00th=[ 169], 80.00th=[ 197], 90.00th=[ 249], 95.00th=[ 300], 00:27:43.377 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 359], 00:27:43.377 | 99.99th=[ 363] 00:27:43.377 bw ( KiB/s): min=53248, max=205824, per=7.85%, avg=115072.00, stdev=50978.25, samples=20 00:27:43.377 iops : min= 208, max= 804, avg=449.50, stdev=199.13, samples=20 00:27:43.377 lat (msec) : 2=0.02%, 4=0.53%, 10=1.93%, 20=2.63%, 50=7.22% 00:27:43.377 lat (msec) : 100=22.14%, 250=55.77%, 500=9.76% 00:27:43.377 cpu : usr=0.96%, sys=1.33%, ctx=2016, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,4558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job9: (groupid=0, jobs=1): err= 0: pid=2518198: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=512, BW=128MiB/s 
(134MB/s)(1299MiB/10145msec); 0 zone resets 00:27:43.377 slat (usec): min=18, max=67521, avg=1572.57, stdev=3819.14 00:27:43.377 clat (usec): min=1418, max=303486, avg=123313.62, stdev=62329.67 00:27:43.377 lat (usec): min=1470, max=303544, avg=124886.19, stdev=63206.52 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 14], 5.00th=[ 54], 10.00th=[ 71], 20.00th=[ 83], 00:27:43.377 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 120], 00:27:43.377 | 70.00th=[ 144], 80.00th=[ 161], 90.00th=[ 232], 95.00th=[ 271], 00:27:43.377 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:27:43.377 | 99.99th=[ 305] 00:27:43.377 bw ( KiB/s): min=59392, max=202752, per=8.97%, avg=131430.40, stdev=49884.26, samples=20 00:27:43.377 iops : min= 232, max= 792, avg=513.40, stdev=194.86, samples=20 00:27:43.377 lat (msec) : 2=0.04%, 4=0.21%, 10=0.58%, 20=0.69%, 50=3.23% 00:27:43.377 lat (msec) : 100=45.91%, 250=41.08%, 500=8.25% 00:27:43.377 cpu : usr=1.26%, sys=1.36%, ctx=2280, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:43.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.377 issued rwts: total=0,5197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.377 job10: (groupid=0, jobs=1): err= 0: pid=2518199: Wed Jul 10 23:31:51 2024 00:27:43.377 write: IOPS=580, BW=145MiB/s (152MB/s)(1474MiB/10159msec); 0 zone resets 00:27:43.377 slat (usec): min=15, max=34442, avg=1328.23, stdev=3259.57 00:27:43.377 clat (msec): min=2, max=361, avg=108.93, stdev=63.75 00:27:43.377 lat (msec): min=2, max=361, avg=110.25, stdev=64.59 00:27:43.377 clat percentiles (msec): 00:27:43.377 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 53], 00:27:43.377 | 30.00th=[ 73], 40.00th=[ 87], 50.00th=[ 102], 60.00th=[ 122], 00:27:43.377 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 255], 00:27:43.377 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 347], 99.95th=[ 347], 00:27:43.377 | 99.99th=[ 363] 00:27:43.377 bw ( KiB/s): min=61440, max=304640, per=10.19%, avg=149281.65, stdev=58912.07, samples=20 00:27:43.377 iops : min= 240, max= 1190, avg=583.10, stdev=230.11, samples=20 00:27:43.377 lat (msec) : 4=0.07%, 10=1.44%, 20=4.31%, 50=10.87%, 100=33.15% 00:27:43.377 lat (msec) : 250=44.85%, 500=5.31% 00:27:43.377 cpu : usr=1.17%, sys=1.74%, ctx=2894, majf=0, minf=1 00:27:43.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:43.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:43.378 issued rwts: total=0,5895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:43.378 00:27:43.378 Run status group 0 (all jobs): 00:27:43.378 WRITE: bw=1431MiB/s (1501MB/s), 103MiB/s-161MiB/s (108MB/s-169MB/s), io=14.2GiB (15.3GB), run=10091-10175msec 00:27:43.378 00:27:43.378 Disk stats (read/write): 00:27:43.378 nvme0n1: ios=43/9221, merge=0/0, ticks=3208/1189426, in_queue=1192634, util=100.00% 00:27:43.378 nvme10n1: ios=46/9650, merge=0/0, ticks=1567/1208636, in_queue=1210203, util=99.87% 00:27:43.378 nvme1n1: ios=52/12990, merge=0/0, ticks=5072/1121528, in_queue=1126600, util=100.00% 00:27:43.378 nvme2n1: ios=40/8191, merge=0/0, ticks=1342/1207599, 
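Same check for the write phase: WRITE: bw=1431MiB/s, io=14.2GiB, and the eleven per-job averages (115+121+161+103+132+154+154+108+113+128+145 MiB/s) sum to about 1434 MiB/s, again matching within rounding. The "0 zone resets" note fio appends to each write job is expected here: zone resets are only issued against zoned block devices, and these namespaces are plain malloc bdevs.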
in_queue=1208941, util=100.00% 00:27:43.378 nvme3n1: ios=0/10552, merge=0/0, ticks=0/1211695, in_queue=1211695, util=96.33% 00:27:43.378 nvme4n1: ios=0/12345, merge=0/0, ticks=0/1208895, in_queue=1208895, util=97.03% 00:27:43.378 nvme5n1: ios=38/12340, merge=0/0, ticks=817/1205242, in_queue=1206059, util=99.89% 00:27:43.378 nvme6n1: ios=42/8547, merge=0/0, ticks=779/1217695, in_queue=1218474, util=99.92% 00:27:43.378 nvme7n1: ios=34/8872, merge=0/0, ticks=1266/1215194, in_queue=1216460, util=99.91% 00:27:43.378 nvme8n1: ios=0/10223, merge=0/0, ticks=0/1214965, in_queue=1214965, util=98.79% 00:27:43.378 nvme9n1: ios=0/11622, merge=0/0, ticks=0/1213029, in_queue=1213029, util=99.03% 00:27:43.378 23:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:43.378 23:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:43.378 23:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:43.378 23:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:43.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:43.378 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:43.636 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:43.637 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:43.637 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:43.637 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:43.637 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:43.637 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:43.637 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:43.895 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:43.895 23:31:52 
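Teardown (multiconnection.sh @37-40, traced above and below) walks the same 1..11 range: disconnect the host side, wait for the serial to disappear, then delete the subsystem on the target. A minimal sketch, with the rpc path assumed as in the setup sketch:

    # Teardown loop sketch (multiconnection.sh @37-40).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed
    for i in $(seq 1 11); do
        nvme disconnect -n nqn.2016-06.io.spdk:cnode$i
        waitforserial_disconnect SPDK$i            # see reconstruction below
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    done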
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:43.895 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.895 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:43.895 23:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.895 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:43.895 23:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:44.462 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:44.462 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:45.029 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.029 23:31:53 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.029 23:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:45.287 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.287 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.288 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:45.856 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.856 23:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:46.116 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:46.116 23:31:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:46.116 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:46.375 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:46.375 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:46.944 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:46.944 23:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:47.204 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:47.204 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:47.464 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.464 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:47.464 rmmod nvme_tcp 00:27:47.464 rmmod nvme_fabrics 00:27:47.464 rmmod nvme_keyring 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2509959 ']' 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2509959 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2509959 ']' 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2509959 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2509959 00:27:47.723 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.724 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.724 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2509959' 00:27:47.724 killing process with pid 2509959 00:27:47.724 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2509959 00:27:47.724 23:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2509959 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.918 23:32:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.918 23:32:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.298 23:32:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.298 00:27:53.298 real 1m16.154s 00:27:53.298 user 4m33.885s 00:27:53.298 sys 0m22.997s 00:27:53.298 23:32:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:53.298 23:32:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:53.298 ************************************ 00:27:53.298 END TEST nvmf_multiconnection 00:27:53.298 ************************************ 00:27:53.298 23:32:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:53.298 23:32:02 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:53.298 23:32:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:53.298 23:32:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.298 23:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.298 ************************************ 00:27:53.298 START TEST nvmf_initiator_timeout 00:27:53.298 ************************************ 00:27:53.298 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:53.298 * Looking for test storage... 00:27:53.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 
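Annotation on the nvmf_multiconnection teardown traced above: multiconnection.sh@37-40 repeats the same three steps for each of the 11 subsystems. A minimal sketch reconstructed from the xtrace — the command sequence is taken directly from the trace, while the internals of waitforserial_disconnect are an assumption based on the lsblk/grep probes it emits:

    # Teardown loop as traced, for NVMF_SUBSYS=11 subsystems.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Drop the initiator-side connection first.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # Assumed to poll 'lsblk -l -o NAME,SERIAL | grep -q -w SPDK<i>'
        # until the namespace with serial SPDK$i is gone from the host.
        waitforserial_disconnect "SPDK$i"
        # Only then remove the subsystem on the target via JSON-RPC.
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done

Deleting the subsystem while the block device is still visible would race the disconnect, which is why the wait sits between the two steps.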
00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
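Annotation on the initiator identity set up in nvmf/common.sh@17-20 above: the hostnqn is generated once and reused for every connect in this test. A sketch of the pattern as it appears in the trace — the hostid extraction shown here is an assumption; the trace only shows that NVME_HOSTID equals the uuid portion of the generated hostnqn:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare uuid; assumed derivation
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Later consumed by initiator_timeout.sh@29, exactly as traced:
    nvme connect "${NVME_HOST[@]}" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Keeping the hostnqn/hostid pair stable means every controller the test creates is attributed to the same NVMe-oF host.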
00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:53.558 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.559 23:32:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:58.832 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@298 -- # local -ga mlx 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.833 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.833 23:32:07 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.833 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.833 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.833 23:32:07 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:27:58.833 00:27:58.833 --- 10.0.0.2 ping statistics --- 00:27:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.833 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:27:58.833 00:27:58.833 --- 10.0.0.1 ping statistics --- 00:27:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.833 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2524266 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2524266 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 2524266 ']' 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.833 23:32:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:58.833 [2024-07-10 23:32:07.356669] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:27:58.833 [2024-07-10 23:32:07.356759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.833 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.833 [2024-07-10 23:32:07.466446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.833 [2024-07-10 23:32:07.689939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.833 [2024-07-10 23:32:07.689983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.833 [2024-07-10 23:32:07.689995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.833 [2024-07-10 23:32:07.690003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.834 [2024-07-10 23:32:07.690012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.834 [2024-07-10 23:32:07.690129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.834 [2024-07-10 23:32:07.690204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.834 [2024-07-10 23:32:07.690234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.834 [2024-07-10 23:32:07.690245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.093 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.093 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:59.093 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.093 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.093 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.352 Malloc0 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.352 Delay0 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.352 23:32:08 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.352 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.353 [2024-07-10 23:32:08.275954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.353 [2024-07-10 23:32:08.304221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.353 23:32:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:00.769 23:32:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:00.769 23:32:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:28:00.769 23:32:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:00.769 23:32:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:00.769 23:32:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:02.683 23:32:11 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2524939 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:02.683 23:32:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:02.683 [global] 00:28:02.683 thread=1 00:28:02.683 invalidate=1 00:28:02.683 rw=write 00:28:02.683 time_based=1 00:28:02.683 runtime=60 00:28:02.683 ioengine=libaio 00:28:02.683 direct=1 00:28:02.683 bs=4096 00:28:02.683 iodepth=1 00:28:02.683 norandommap=0 00:28:02.683 numjobs=1 00:28:02.683 00:28:02.683 verify_dump=1 00:28:02.683 verify_backlog=512 00:28:02.683 verify_state_save=0 00:28:02.683 do_verify=1 00:28:02.683 verify=crc32c-intel 00:28:02.683 [job0] 00:28:02.683 filename=/dev/nvme0n1 00:28:02.683 Could not set queue depth (nvme0n1) 00:28:02.943 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:02.943 fio-3.35 00:28:02.943 Starting 1 thread 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.473 true 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.473 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 true 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 true 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 true 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.731 23:32:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.021 true 00:28:09.021 23:32:17 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.021 true 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.021 true 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.021 true 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:09.021 23:32:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2524939 00:29:05.252 00:29:05.252 job0: (groupid=0, jobs=1): err= 0: pid=2525162: Wed Jul 10 23:33:12 2024 00:29:05.252 read: IOPS=68, BW=273KiB/s (280kB/s)(16.0MiB/60040msec) 00:29:05.252 slat (usec): min=6, max=11528, avg=13.20, stdev=191.62 00:29:05.252 clat (usec): min=267, max=41533k, avg=14358.94, stdev=648686.94 00:29:05.252 lat (usec): min=275, max=41533k, avg=14372.14, stdev=648687.35 00:29:05.252 clat percentiles (usec): 00:29:05.252 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 306], 00:29:05.252 | 20.00th=[ 310], 30.00th=[ 314], 40.00th=[ 318], 00:29:05.252 | 50.00th=[ 326], 60.00th=[ 330], 70.00th=[ 334], 00:29:05.252 | 80.00th=[ 347], 90.00th=[ 523], 95.00th=[ 41157], 00:29:05.252 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:29:05.252 | 99.95th=[ 42206], 99.99th=[17112761] 00:29:05.252 write: IOPS=76, BW=307KiB/s (314kB/s)(18.0MiB/60040msec); 0 zone resets 00:29:05.252 slat (nsec): min=10184, max=45174, avg=11487.36, stdev=1957.35 00:29:05.252 clat (usec): min=185, max=1378, avg=224.06, stdev=23.86 00:29:05.252 lat (usec): min=202, max=1388, avg=235.55, stdev=24.07 00:29:05.252 clat percentiles (usec): 00:29:05.252 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:29:05.252 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:29:05.252 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 253], 00:29:05.252 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 363], 99.95th=[ 424], 00:29:05.252 | 99.99th=[ 1385] 00:29:05.252 bw ( KiB/s): min= 4096, max= 8175, per=100.00%, avg=6550.20, stdev=2084.81, samples=5 00:29:05.252 iops : min= 1024, max= 2043, avg=1637.40, stdev=521.06, samples=5 00:29:05.252 lat (usec) : 250=49.06%, 500=45.60%, 750=0.80% 00:29:05.252 lat (msec) : 2=0.01%, 50=4.51%, >=2000=0.01% 00:29:05.252 cpu : usr=0.13%, sys=0.24%, ctx=8711, majf=0, 
minf=2 00:29:05.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:05.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.252 issued rwts: total=4100,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:05.252 00:29:05.252 Run status group 0 (all jobs): 00:29:05.252 READ: bw=273KiB/s (280kB/s), 273KiB/s-273KiB/s (280kB/s-280kB/s), io=16.0MiB (16.8MB), run=60040-60040msec 00:29:05.252 WRITE: bw=307KiB/s (314kB/s), 307KiB/s-307KiB/s (314kB/s-314kB/s), io=18.0MiB (18.9MB), run=60040-60040msec 00:29:05.252 00:29:05.252 Disk stats (read/write): 00:29:05.252 nvme0n1: ios=4195/4608, merge=0/0, ticks=17203/969, in_queue=18172, util=99.90% 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:05.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:05.252 nvmf hotplug test: fio successful as expected 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:05.252 rmmod nvme_tcp 00:29:05.252 rmmod nvme_fabrics 00:29:05.252 rmmod nvme_keyring 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2524266 ']' 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2524266 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 2524266 ']' 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 2524266 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2524266 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2524266' 00:29:05.252 killing process with pid 2524266 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 2524266 00:29:05.252 23:33:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 2524266 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.252 23:33:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.158 23:33:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.158 00:29:07.158 real 1m13.717s 00:29:07.158 user 4m28.795s 00:29:07.158 sys 0m5.593s 00:29:07.158 23:33:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.158 23:33:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.158 ************************************ 00:29:07.158 END TEST nvmf_initiator_timeout 00:29:07.158 ************************************ 00:29:07.158 23:33:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:07.158 23:33:16 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:29:07.158 23:33:16 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:29:07.158 23:33:16 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:29:07.158 
23:33:16 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:29:07.158 23:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:12.436 23:33:21 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.437 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.437 Found net devices under 0000:86:00.0: cvl_0_0 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.437 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:29:12.437 23:33:21 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:12.437 23:33:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.437 23:33:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.437 23:33:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.437 ************************************ 00:29:12.437 START TEST nvmf_perf_adq 00:29:12.437 ************************************ 00:29:12.437 23:33:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:12.695 * Looking for test storage... 
00:29:12.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.695 23:33:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:18.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:18.031 Found 0000:86:00.1 (0x8086 - 0x159b) 
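The nvmf/common.sh sourcing traced a little earlier (the NVMF_PORT/NVME_HOSTNQN lines) establishes the connection defaults reused for the rest of this run; collected into one standalone snippet, with values copied from what the log prints (the HOSTID derivation is assumed, since the trace only shows the resulting value):

# defaults as printed by the nvmf/common.sh trace for this run
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:... in this run
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # assumed derivation; the log shows only the uuid value
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'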
00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:18.031 Found net devices under 0000:86:00.0: cvl_0_0 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:18.031 Found net devices under 0000:86:00.1: cvl_0_1 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:29:18.031 23:33:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:18.600 23:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:20.503 23:33:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:25.792 23:33:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:25.792 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:25.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:25.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:25.793 Found net devices under 0000:86:00.0: cvl_0_0 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:25.793 Found net devices under 0000:86:00.1: cvl_0_1 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.793 23:33:34 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:25.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:29:25.793 00:29:25.793 --- 10.0.0.2 ping statistics --- 00:29:25.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.793 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:29:25.793 00:29:25.793 --- 10.0.0.1 ping statistics --- 00:29:25.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.793 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2543236 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2543236 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2543236 ']' 00:29:25.793 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.794 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.794 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.794 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.794 23:33:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.794 [2024-07-10 23:33:34.844882] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
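The nvmftestinit block above splits the two E810 ports between a fresh network namespace (target side) and the root namespace (initiator side) before checking reachability; condensed into a standalone sketch using the interface names, addresses, and commands this log records (run as root):

# target port lives in its own namespace, initiator port stays in the root ns
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# admit NVMe/TCP traffic on the port the tests listen on
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target -> initiator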
00:29:25.794 [2024-07-10 23:33:34.844972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.053 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.053 [2024-07-10 23:33:34.956149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.312 [2024-07-10 23:33:35.182602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.312 [2024-07-10 23:33:35.182641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.312 [2024-07-10 23:33:35.182653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.312 [2024-07-10 23:33:35.182663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.312 [2024-07-10 23:33:35.182672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.312 [2024-07-10 23:33:35.182738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.312 [2024-07-10 23:33:35.182756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.312 [2024-07-10 23:33:35.182855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.312 [2024-07-10 23:33:35.182863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.571 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.571 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:26.571 23:33:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.571 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:26.571 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.830 23:33:35 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.089 [2024-07-10 23:33:36.132433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.089 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.349 Malloc1 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.349 [2024-07-10 23:33:36.253481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2543501 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:29:27.349 23:33:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:27.349 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:29:29.250 
"tick_rate": 2300000000, 00:29:29.250 "poll_groups": [ 00:29:29.250 { 00:29:29.250 "name": "nvmf_tgt_poll_group_000", 00:29:29.250 "admin_qpairs": 1, 00:29:29.250 "io_qpairs": 1, 00:29:29.250 "current_admin_qpairs": 1, 00:29:29.250 "current_io_qpairs": 1, 00:29:29.250 "pending_bdev_io": 0, 00:29:29.250 "completed_nvme_io": 18450, 00:29:29.250 "transports": [ 00:29:29.250 { 00:29:29.250 "trtype": "TCP" 00:29:29.250 } 00:29:29.250 ] 00:29:29.250 }, 00:29:29.250 { 00:29:29.250 "name": "nvmf_tgt_poll_group_001", 00:29:29.250 "admin_qpairs": 0, 00:29:29.250 "io_qpairs": 1, 00:29:29.250 "current_admin_qpairs": 0, 00:29:29.250 "current_io_qpairs": 1, 00:29:29.250 "pending_bdev_io": 0, 00:29:29.250 "completed_nvme_io": 18753, 00:29:29.250 "transports": [ 00:29:29.250 { 00:29:29.250 "trtype": "TCP" 00:29:29.250 } 00:29:29.250 ] 00:29:29.250 }, 00:29:29.250 { 00:29:29.250 "name": "nvmf_tgt_poll_group_002", 00:29:29.250 "admin_qpairs": 0, 00:29:29.250 "io_qpairs": 1, 00:29:29.250 "current_admin_qpairs": 0, 00:29:29.250 "current_io_qpairs": 1, 00:29:29.250 "pending_bdev_io": 0, 00:29:29.250 "completed_nvme_io": 18770, 00:29:29.250 "transports": [ 00:29:29.250 { 00:29:29.250 "trtype": "TCP" 00:29:29.250 } 00:29:29.250 ] 00:29:29.250 }, 00:29:29.250 { 00:29:29.250 "name": "nvmf_tgt_poll_group_003", 00:29:29.250 "admin_qpairs": 0, 00:29:29.250 "io_qpairs": 1, 00:29:29.250 "current_admin_qpairs": 0, 00:29:29.250 "current_io_qpairs": 1, 00:29:29.250 "pending_bdev_io": 0, 00:29:29.250 "completed_nvme_io": 18386, 00:29:29.250 "transports": [ 00:29:29.250 { 00:29:29.250 "trtype": "TCP" 00:29:29.250 } 00:29:29.250 ] 00:29:29.250 } 00:29:29.250 ] 00:29:29.250 }' 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:29.250 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:29:29.509 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:29:29.510 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:29:29.510 23:33:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2543501 00:29:37.623 Initializing NVMe Controllers 00:29:37.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:37.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:37.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:37.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:37.623 Initialization complete. Launching workers. 
00:29:37.623 ======================================================== 00:29:37.623 Latency(us) 00:29:37.623 Device Information : IOPS MiB/s Average min max 00:29:37.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10165.53 39.71 6297.23 2668.20 9373.30 00:29:37.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10209.43 39.88 6268.77 2089.38 12232.40 00:29:37.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10023.14 39.15 6386.91 3067.26 10418.48 00:29:37.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9935.34 38.81 6441.15 2699.89 12144.45 00:29:37.623 ======================================================== 00:29:37.623 Total : 40333.44 157.55 6347.77 2089.38 12232.40 00:29:37.623 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.623 rmmod nvme_tcp 00:29:37.623 rmmod nvme_fabrics 00:29:37.623 rmmod nvme_keyring 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2543236 ']' 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2543236 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2543236 ']' 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2543236 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2543236 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2543236' 00:29:37.623 killing process with pid 2543236 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2543236 00:29:37.623 23:33:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2543236 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.525 23:33:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.429 23:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.429 23:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:41.429 23:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:42.367 23:33:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:44.311 23:33:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.588 23:33:58 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:49.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:49.588 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:49.588 Found net devices under 0000:86:00.0: cvl_0_0 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:49.588 Found net devices under 0000:86:00.1: cvl_0_1 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.588 
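The nvmf_tcp_init sequence running here splits the two E810 ports across network namespaces so the NVMe/TCP traffic genuinely crosses the wire on one host: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2/24, while the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1/24; the address assignment, iptables rule, and ping checks that follow complete the setup. Consolidated from the commands in this log:

# Target side lives in a netns, initiator side in the root namespace.
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"             # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target check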
23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:49.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:49.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms
00:29:49.588
00:29:49.588 --- 10.0.0.2 ping statistics ---
00:29:49.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:49.588 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:49.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:49.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:29:49.588
00:29:49.588 --- 10.0.0.1 ping statistics ---
00:29:49.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:49.588 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:49.588 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:29:49.589 net.core.busy_poll = 1
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:29:49.589 net.core.busy_read = 1
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:29:49.589 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc
add dev cvl_0_0 ingress 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2547497 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2547497 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2547497 ']' 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.848 23:33:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:49.848 [2024-07-10 23:33:58.852238] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:29:49.848 [2024-07-10 23:33:58.852327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.848 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.106 [2024-07-10 23:33:58.959997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.365 [2024-07-10 23:33:59.176460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.365 [2024-07-10 23:33:59.176508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.365 [2024-07-10 23:33:59.176522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.365 [2024-07-10 23:33:59.176531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.365 [2024-07-10 23:33:59.176540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
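The adq_configure_driver block above is where ADQ actually takes shape: hw-tc-offload and kernel busy polling are enabled, an mqprio root qdisc in channel mode carves the four queues into two traffic classes (TC0 = queues 0-1, TC1 = queues 2-3), and a flower filter with skip_sw pins the NVMe/TCP flow to 10.0.0.2:4420 onto TC1 entirely in NIC hardware; set_xps_rxqs then aligns transmit queue selection with the same queue set. The steering commands, gathered for reference with values copied verbatim from the log:

# Two traffic classes in channel mode: TC0 = 2 queues at offset 0, TC1 = 2 at offset 2
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Match NVMe/TCP to 10.0.0.2:4420 and steer it to hardware TC 1, bypassing software
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1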
00:29:50.365 [2024-07-10 23:33:59.176615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.365 [2024-07-10 23:33:59.176696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.365 [2024-07-10 23:33:59.176742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.365 [2024-07-10 23:33:59.176752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.625 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.884 23:33:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:51.144 [2024-07-10 23:34:00.123612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.144 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:51.404 Malloc1 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.404 23:34:00 
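Because the target was started with --wait-for-rpc, the test can tune the posix sock implementation before the framework initializes, which is the only window in which sock_impl_set_options may be changed; only then is the TCP transport created with an 8 KiB I/O unit and socket priority 1 (the priority that keeps these connections on the ADQ queue set), and a 64 MiB malloc bdev is added as backing store. The same sequence as plain rpc.py calls; rpc_cmd in this log is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init          # resume the app paused by --wait-for-rpc
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
                                             # '-o' is appended by NVMF_TRANSPORT_OPTS for tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB ramdisk, 512 B blocks

The subsystem wiring follows in the log: nvmf_create_subsystem, nvmf_subsystem_add_ns, and a listener on 10.0.0.2:4420.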
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:51.404 [2024-07-10 23:34:00.247579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2547755 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:51.404 23:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:51.404 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:53.306 "tick_rate": 2300000000, 00:29:53.306 "poll_groups": [ 00:29:53.306 { 00:29:53.306 "name": "nvmf_tgt_poll_group_000", 00:29:53.306 "admin_qpairs": 1, 00:29:53.306 "io_qpairs": 4, 00:29:53.306 "current_admin_qpairs": 1, 00:29:53.306 "current_io_qpairs": 4, 00:29:53.306 "pending_bdev_io": 0, 00:29:53.306 "completed_nvme_io": 36566, 00:29:53.306 "transports": [ 00:29:53.306 { 00:29:53.306 "trtype": "TCP" 00:29:53.306 } 00:29:53.306 ] 00:29:53.306 }, 00:29:53.306 { 00:29:53.306 "name": "nvmf_tgt_poll_group_001", 00:29:53.306 "admin_qpairs": 0, 00:29:53.306 "io_qpairs": 0, 00:29:53.306 "current_admin_qpairs": 0, 00:29:53.306 "current_io_qpairs": 0, 00:29:53.306 "pending_bdev_io": 0, 00:29:53.306 "completed_nvme_io": 0, 00:29:53.306 "transports": [ 00:29:53.306 { 00:29:53.306 "trtype": "TCP" 00:29:53.306 } 00:29:53.306 ] 00:29:53.306 }, 00:29:53.306 { 00:29:53.306 "name": "nvmf_tgt_poll_group_002", 00:29:53.306 "admin_qpairs": 0, 00:29:53.306 "io_qpairs": 0, 00:29:53.306 "current_admin_qpairs": 0, 00:29:53.306 "current_io_qpairs": 0, 00:29:53.306 "pending_bdev_io": 0, 00:29:53.306 "completed_nvme_io": 0, 00:29:53.306 
"transports": [ 00:29:53.306 { 00:29:53.306 "trtype": "TCP" 00:29:53.306 } 00:29:53.306 ] 00:29:53.306 }, 00:29:53.306 { 00:29:53.306 "name": "nvmf_tgt_poll_group_003", 00:29:53.306 "admin_qpairs": 0, 00:29:53.306 "io_qpairs": 0, 00:29:53.306 "current_admin_qpairs": 0, 00:29:53.306 "current_io_qpairs": 0, 00:29:53.306 "pending_bdev_io": 0, 00:29:53.306 "completed_nvme_io": 0, 00:29:53.306 "transports": [ 00:29:53.306 { 00:29:53.306 "trtype": "TCP" 00:29:53.306 } 00:29:53.306 ] 00:29:53.306 } 00:29:53.306 ] 00:29:53.306 }' 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:29:53.306 23:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2547755 00:30:01.423 Initializing NVMe Controllers 00:30:01.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:01.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:01.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:01.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:01.423 Initialization complete. Launching workers. 00:30:01.423 ======================================================== 00:30:01.423 Latency(us) 00:30:01.423 Device Information : IOPS MiB/s Average min max 00:30:01.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5025.50 19.63 12736.72 1689.35 62137.84 00:30:01.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4824.60 18.85 13304.99 2058.10 57503.24 00:30:01.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4914.20 19.20 13023.44 2327.03 59589.01 00:30:01.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5115.40 19.98 12550.41 1855.43 58237.18 00:30:01.423 ======================================================== 00:30:01.423 Total : 19879.69 77.66 12897.57 1689.35 62137.84 00:30:01.423 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:01.681 rmmod nvme_tcp 00:30:01.681 rmmod nvme_fabrics 00:30:01.681 rmmod nvme_keyring 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2547497 ']' 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 
2547497 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2547497 ']' 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2547497 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2547497 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2547497' 00:30:01.681 killing process with pid 2547497 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2547497 00:30:01.681 23:34:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2547497 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.176 23:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.713 23:34:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:05.713 23:34:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:05.713 00:30:05.713 real 0m52.794s 00:30:05.713 user 2m59.211s 00:30:05.713 sys 0m9.130s 00:30:05.713 23:34:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.713 23:34:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:05.713 ************************************ 00:30:05.713 END TEST nvmf_perf_adq 00:30:05.713 ************************************ 00:30:05.713 23:34:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:05.713 23:34:14 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:05.713 23:34:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:05.713 23:34:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.713 23:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.713 ************************************ 00:30:05.713 START TEST nvmf_shutdown 00:30:05.713 ************************************ 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:05.713 * Looking for test storage... 
00:30:05.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:05.713 ************************************ 00:30:05.713 START TEST nvmf_shutdown_tc1 00:30:05.713 ************************************ 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:30:05.713 23:34:14 
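Each test case in this suite runs under the run_test helper, which prints the START TEST / END TEST banners seen here and times the case (the real/user/sys summary above came from the nvmf_perf_adq run). A simplified sketch of that wrapper; the real helper in autotest_common.sh also manages xtrace state and timing records:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # run the test function with its arguments
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}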
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.713 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:05.714 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:05.714 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.714 23:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:10.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:10.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.992 23:34:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.992 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:10.993 Found net devices under 0000:86:00.0: cvl_0_0 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:10.993 Found net devices under 0000:86:00.1: cvl_0_1 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:10.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:30:10.993 00:30:10.993 --- 10.0.0.2 ping statistics --- 00:30:10.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.993 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms
00:30:10.993
00:30:10.993 --- 10.0.0.1 ping statistics ---
00:30:10.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:10.993 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2553083
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2553083
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2553083 ']'
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:10.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:10.993 23:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:10.993 [2024-07-10 23:34:19.877579] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:30:10.993 [2024-07-10 23:34:19.877668] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.993 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.993 [2024-07-10 23:34:19.991193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.252 [2024-07-10 23:34:20.218022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.252 [2024-07-10 23:34:20.218067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.252 [2024-07-10 23:34:20.218079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.252 [2024-07-10 23:34:20.218087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.252 [2024-07-10 23:34:20.218096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.252 [2024-07-10 23:34:20.218221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.252 [2024-07-10 23:34:20.218249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.252 [2024-07-10 23:34:20.218333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.252 [2024-07-10 23:34:20.218357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:11.821 [2024-07-10 23:34:20.697042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:11.821 23:34:20 
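Rather than invoking rpc.py dozens of times, the create_subsystems loop that follows appends one block of RPC commands per subsystem to rpcs.txt and replays the whole file through a single rpc_cmd call, which feeds the commands to scripts/rpc.py on stdin, one per line. A sketch of the pattern; the exact per-subsystem block is an illustrative guess reconstructed from the Malloc1-Malloc10 bdevs and listeners that appear below, and the SPDK$i serial is assumed:

rm -rf rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < rpcs.txt    # one python start-up instead of forty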
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.821 23:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:11.821 Malloc1 00:30:11.821 [2024-07-10 23:34:20.859178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.080 Malloc2 00:30:12.080 Malloc3 00:30:12.340 Malloc4 00:30:12.340 Malloc5 00:30:12.340 Malloc6 00:30:12.599 Malloc7 00:30:12.599 Malloc8 00:30:12.859 Malloc9 00:30:12.859 Malloc10 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2553469 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2553469 
/var/tmp/bdevperf.sock 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2553469 ']' 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:12.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.859 { 00:30:12.859 "params": { 00:30:12.859 "name": "Nvme$subsystem", 00:30:12.859 "trtype": "$TEST_TRANSPORT", 00:30:12.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.859 "adrfam": "ipv4", 00:30:12.859 "trsvcid": "$NVMF_PORT", 00:30:12.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.859 "hdgst": ${hdgst:-false}, 00:30:12.859 "ddgst": ${ddgst:-false} 00:30:12.859 }, 00:30:12.859 "method": "bdev_nvme_attach_controller" 00:30:12.859 } 00:30:12.859 EOF 00:30:12.859 )") 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.859 { 00:30:12.859 "params": { 00:30:12.859 "name": "Nvme$subsystem", 00:30:12.859 "trtype": "$TEST_TRANSPORT", 00:30:12.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.859 "adrfam": "ipv4", 00:30:12.859 "trsvcid": "$NVMF_PORT", 00:30:12.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.859 "hdgst": ${hdgst:-false}, 00:30:12.859 "ddgst": ${ddgst:-false} 00:30:12.859 }, 00:30:12.859 "method": "bdev_nvme_attach_controller" 00:30:12.859 } 00:30:12.859 EOF 00:30:12.859 )") 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.859 { 00:30:12.859 "params": { 00:30:12.859 
"name": "Nvme$subsystem", 00:30:12.859 "trtype": "$TEST_TRANSPORT", 00:30:12.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.859 "adrfam": "ipv4", 00:30:12.859 "trsvcid": "$NVMF_PORT", 00:30:12.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.859 "hdgst": ${hdgst:-false}, 00:30:12.859 "ddgst": ${ddgst:-false} 00:30:12.859 }, 00:30:12.859 "method": "bdev_nvme_attach_controller" 00:30:12.859 } 00:30:12.859 EOF 00:30:12.859 )") 00:30:12.859 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 
00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.120 { 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme$subsystem", 00:30:13.120 "trtype": "$TEST_TRANSPORT", 00:30:13.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "$NVMF_PORT", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.120 "hdgst": ${hdgst:-false}, 00:30:13.120 "ddgst": ${ddgst:-false} 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 } 00:30:13.120 EOF 00:30:13.120 )") 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:13.120 23:34:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme1", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 },{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme2", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 },{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme3", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 },{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme4", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 },{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme5", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 },{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme6", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.120 },{ 00:30:13.120 "params": { 00:30:13.120 "name": "Nvme7", 00:30:13.120 "trtype": "tcp", 00:30:13.120 "traddr": "10.0.0.2", 00:30:13.120 "adrfam": "ipv4", 00:30:13.120 "trsvcid": "4420", 00:30:13.120 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:13.120 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:13.120 "hdgst": false, 00:30:13.120 "ddgst": false 00:30:13.120 }, 00:30:13.120 "method": "bdev_nvme_attach_controller" 00:30:13.121 },{ 00:30:13.121 "params": { 00:30:13.121 "name": "Nvme8", 00:30:13.121 "trtype": "tcp", 00:30:13.121 "traddr": "10.0.0.2", 00:30:13.121 "adrfam": "ipv4", 00:30:13.121 "trsvcid": "4420", 00:30:13.121 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:13.121 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:13.121 "hdgst": false, 
00:30:13.121 "ddgst": false 00:30:13.121 }, 00:30:13.121 "method": "bdev_nvme_attach_controller" 00:30:13.121 },{ 00:30:13.121 "params": { 00:30:13.121 "name": "Nvme9", 00:30:13.121 "trtype": "tcp", 00:30:13.121 "traddr": "10.0.0.2", 00:30:13.121 "adrfam": "ipv4", 00:30:13.121 "trsvcid": "4420", 00:30:13.121 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:13.121 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:13.121 "hdgst": false, 00:30:13.121 "ddgst": false 00:30:13.121 }, 00:30:13.121 "method": "bdev_nvme_attach_controller" 00:30:13.121 },{ 00:30:13.121 "params": { 00:30:13.121 "name": "Nvme10", 00:30:13.121 "trtype": "tcp", 00:30:13.121 "traddr": "10.0.0.2", 00:30:13.121 "adrfam": "ipv4", 00:30:13.121 "trsvcid": "4420", 00:30:13.121 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:13.121 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:13.121 "hdgst": false, 00:30:13.121 "ddgst": false 00:30:13.121 }, 00:30:13.121 "method": "bdev_nvme_attach_controller" 00:30:13.121 }' 00:30:13.121 [2024-07-10 23:34:21.976868] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:30:13.121 [2024-07-10 23:34:21.976954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:13.121 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.121 [2024-07-10 23:34:22.084228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.380 [2024-07-10 23:34:22.321049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2553469 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:30:15.915 23:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:16.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2553469 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2553083 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:16.484 23:34:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.484 }, 00:30:16.484 "method": "bdev_nvme_attach_controller" 00:30:16.484 } 00:30:16.484 EOF 00:30:16.484 )") 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.484 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.484 { 00:30:16.484 "params": { 00:30:16.484 "name": "Nvme$subsystem", 00:30:16.484 "trtype": "$TEST_TRANSPORT", 00:30:16.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.484 "adrfam": "ipv4", 00:30:16.484 "trsvcid": "$NVMF_PORT", 00:30:16.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.484 "hdgst": ${hdgst:-false}, 00:30:16.484 "ddgst": ${ddgst:-false} 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 } 00:30:16.485 EOF 00:30:16.485 )") 00:30:16.485 23:34:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.485 { 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme$subsystem", 00:30:16.485 "trtype": "$TEST_TRANSPORT", 00:30:16.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "$NVMF_PORT", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.485 "hdgst": ${hdgst:-false}, 00:30:16.485 "ddgst": ${ddgst:-false} 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 } 00:30:16.485 EOF 00:30:16.485 )") 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.485 { 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme$subsystem", 00:30:16.485 "trtype": "$TEST_TRANSPORT", 00:30:16.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "$NVMF_PORT", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.485 "hdgst": ${hdgst:-false}, 00:30:16.485 "ddgst": ${ddgst:-false} 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 } 00:30:16.485 EOF 00:30:16.485 )") 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
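
Before the second expanded config prints below, the tc1 sequence traced so far is worth restating; a schematic recap keyed to the shutdown.sh line numbers visible in the trace (pids are this run's, and the gen_nvmf_target_json process substitutions appear in the trace as /dev/fd/63 and /dev/fd/62):

# @77: start a dummy initiator (bdev_svc) attached to all ten subsystems
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!                                             # 2553469 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock        # @79
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init  # @80: app fully initialized

kill -9 "$perfpid"                                     # @83: hard-kill the initiator
rm -f /var/run/spdk_bdev1                              # @84
sleep 1                                                # @87
kill -0 "$nvmfpid"                                     # @88: target (2553083) must survive

# @91: then drive real I/O at the still-running target
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 1
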
00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:16.485 23:34:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme1", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme2", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme3", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme4", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme5", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme6", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme7", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme8", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:16.485 "hdgst": false, 
00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme9", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 },{ 00:30:16.485 "params": { 00:30:16.485 "name": "Nvme10", 00:30:16.485 "trtype": "tcp", 00:30:16.485 "traddr": "10.0.0.2", 00:30:16.485 "adrfam": "ipv4", 00:30:16.485 "trsvcid": "4420", 00:30:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:16.485 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:16.485 "hdgst": false, 00:30:16.485 "ddgst": false 00:30:16.485 }, 00:30:16.485 "method": "bdev_nvme_attach_controller" 00:30:16.485 }' 00:30:16.485 [2024-07-10 23:34:25.548640] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:30:16.485 [2024-07-10 23:34:25.548730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554045 ] 00:30:16.745 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.745 [2024-07-10 23:34:25.655083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.004 [2024-07-10 23:34:25.875137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.907 Running I/O for 1 seconds... 00:30:19.842 00:30:19.842 Latency(us) 00:30:19.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.842 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme1n1 : 1.10 237.33 14.83 0.00 0.00 262610.42 23251.03 227039.50 00:30:19.842 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme2n1 : 1.05 244.81 15.30 0.00 0.00 254382.75 18122.13 240716.58 00:30:19.842 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme3n1 : 1.17 272.64 17.04 0.00 0.00 225695.30 14588.88 244363.80 00:30:19.842 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme4n1 : 1.18 271.71 16.98 0.00 0.00 221123.05 6439.62 246187.41 00:30:19.842 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme5n1 : 1.11 229.91 14.37 0.00 0.00 258665.29 20287.67 246187.41 00:30:19.842 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme6n1 : 1.18 270.23 16.89 0.00 0.00 216518.12 9004.08 244363.80 00:30:19.842 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme7n1 : 1.11 231.07 14.44 0.00 0.00 248731.83 19261.89 227951.30 00:30:19.842 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 
0x0 length 0x400 00:30:19.842 Nvme8n1 : 1.19 268.77 16.80 0.00 0.00 212029.44 16982.37 249834.63 00:30:19.842 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme9n1 : 1.17 219.44 13.71 0.00 0.00 254963.31 17666.23 249834.63 00:30:19.842 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:19.842 Verification LBA range: start 0x0 length 0x400 00:30:19.842 Nvme10n1 : 1.20 267.19 16.70 0.00 0.00 206791.90 14303.94 266247.12 00:30:19.842 =================================================================================================================== 00:30:19.842 Total : 2513.11 157.07 0.00 0.00 233999.81 6439.62 266247.12 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:21.220 23:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:21.220 rmmod nvme_tcp 00:30:21.220 rmmod nvme_fabrics 00:30:21.220 rmmod nvme_keyring 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2553083 ']' 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2553083 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2553083 ']' 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2553083 00:30:21.220 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2553083 00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
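
The teardown traced around this point (stoptarget, then nvmftestfini, with the killprocess steps continuing just below) reduces to a short routine. A condensed sketch, assuming the helpers do no more than their trace lines show:

# stoptarget (shutdown.sh@41-43): drop test artifacts
rm -f ./local-job0-0-verify.state
rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"

# nvmftestfini (nvmf/common.sh@488+): flush and unload the kernel NVMe/TCP stack
sync
modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics

# killprocess (autotest_common.sh@948-@972, sketched): guard, signal, reap
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                       # @952: nothing to do if already gone
    if [[ $(ps --no-headers -o comm= "$pid") == sudo ]]; then
        return 1                                     # @954/@958: never kill sudo itself
    fi
    echo "killing process with pid $pid"             # @966
    kill "$pid"                                      # @967
    wait "$pid"                                      # @972: reap the target's exit status
}
killprocess "$nvmfpid"         # 2553083: the nvmf_tgt started for tc1
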
00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2553083' 00:30:21.221 killing process with pid 2553083 00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2553083 00:30:21.221 23:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2553083 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.511 23:34:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.419 00:30:26.419 real 0m20.995s 00:30:26.419 user 0m58.889s 00:30:26.419 sys 0m5.742s 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:26.419 ************************************ 00:30:26.419 END TEST nvmf_shutdown_tc1 00:30:26.419 ************************************ 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.419 23:34:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:26.679 ************************************ 00:30:26.679 START TEST nvmf_shutdown_tc2 00:30:26.679 ************************************ 00:30:26.679 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:30:26.679 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:26.679 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:26.679 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.680 23:34:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:26.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:26.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:26.680 Found net devices under 0000:86:00.0: cvl_0_0 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:26.680 Found net devices under 0000:86:00.1: cvl_0_1 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:26.680 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:26.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:30:26.940 00:30:26.940 --- 10.0.0.2 ping statistics --- 00:30:26.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.940 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:30:26.940 00:30:26.940 --- 10.0.0.1 ping statistics --- 00:30:26.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.940 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2555887 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2555887 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2555887 ']' 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.940 23:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.940 [2024-07-10 23:34:35.921077] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:30:26.940 [2024-07-10 23:34:35.921170] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.940 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.200 [2024-07-10 23:34:36.030014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.200 [2024-07-10 23:34:36.238564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.200 [2024-07-10 23:34:36.238610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.200 [2024-07-10 23:34:36.238621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.200 [2024-07-10 23:34:36.238631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.200 [2024-07-10 23:34:36.238639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
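
The two app_setup_trace notices above are actionable: nvmf_tgt was started with shm id 0 (-i 0) and the full tracepoint mask (-e 0xFFFF), so a trace snapshot can be captured while the test runs. Assuming the spdk_trace tool built under build/bin is on PATH (and that its -f option accepts a copied trace file):

# Live snapshot of the "nvmf" app's tracepoints, exactly as the notice says:
spdk_trace -s nvmf -i 0

# Or keep the shared-memory trace file for offline analysis after the run:
cp /dev/shm/nvmf_trace.0 /tmp/ && spdk_trace -f /tmp/nvmf_trace.0
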
00:30:27.200 [2024-07-10 23:34:36.238762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.200 [2024-07-10 23:34:36.238838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.200 [2024-07-10 23:34:36.238919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.200 [2024-07-10 23:34:36.238943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.769 [2024-07-10 23:34:36.733816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.769 23:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.030 Malloc1 00:30:28.030 [2024-07-10 23:34:36.888901] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.030 Malloc2 00:30:28.030 Malloc3 00:30:28.289 Malloc4 00:30:28.289 Malloc5 00:30:28.549 Malloc6 00:30:28.549 Malloc7 00:30:28.808 Malloc8 00:30:28.808 Malloc9 00:30:29.068 Malloc10 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2556170 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2556170 /var/tmp/bdevperf.sock 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2556170 ']' 00:30:29.068 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:29.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
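The trace that follows shows gen_nvmf_target_json assembling the --json payload for bdevperf: one heredoc JSON fragment per subsystem, accumulated into a bash array and finally comma-joined by printf with IFS=','. A minimal standalone sketch of that pattern, with the values this run substitutes (tcp, 10.0.0.2, port 4420) hard-coded rather than derived from the test environment as the real nvmf/common.sh helper does:

```bash
#!/usr/bin/env bash
# Sketch of the per-subsystem config accumulation visible in the trace below;
# the real helper lives in nvmf/common.sh, this is a simplified illustration.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-1}"; do   # e.g. invoked with subsystems 1..10 in this run
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments; the harness hands the result to bdevperf as
# --json /dev/fd/63 through process substitution, as seen in the command line above.
IFS=','
printf '%s\n' "${config[*]}"
```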
00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 
00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.069 } 00:30:29.069 EOF 00:30:29.069 )") 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.069 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.069 { 00:30:29.069 "params": { 00:30:29.069 "name": "Nvme$subsystem", 00:30:29.069 "trtype": "$TEST_TRANSPORT", 00:30:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.069 "adrfam": "ipv4", 00:30:29.069 "trsvcid": "$NVMF_PORT", 00:30:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.069 "hdgst": ${hdgst:-false}, 00:30:29.069 "ddgst": ${ddgst:-false} 00:30:29.069 }, 00:30:29.069 "method": "bdev_nvme_attach_controller" 00:30:29.070 } 00:30:29.070 EOF 00:30:29.070 )") 00:30:29.070 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:29.070 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:30:29.070 [2024-07-10 23:34:38.032144] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:30:29.070 [2024-07-10 23:34:38.032240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556170 ] 00:30:29.070 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:30:29.070 23:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme1", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme2", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme3", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme4", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme5", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme6", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme7", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 
00:30:29.070 "name": "Nvme8", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme9", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 },{ 00:30:29.070 "params": { 00:30:29.070 "name": "Nvme10", 00:30:29.070 "trtype": "tcp", 00:30:29.070 "traddr": "10.0.0.2", 00:30:29.070 "adrfam": "ipv4", 00:30:29.070 "trsvcid": "4420", 00:30:29.070 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:29.070 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:29.070 "hdgst": false, 00:30:29.070 "ddgst": false 00:30:29.070 }, 00:30:29.070 "method": "bdev_nvme_attach_controller" 00:30:29.070 }' 00:30:29.070 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.330 [2024-07-10 23:34:38.138938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.330 [2024-07-10 23:34:38.373781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.864 Running I/O for 10 seconds... 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:31.864 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=83 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 83 -ge 100 ']' 00:30:31.865 23:34:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2556170 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2556170 ']' 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2556170 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']'
00:30:32.124 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2556170
00:30:32.383 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:32.383 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:32.383 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2556170'
00:30:32.383 killing process with pid 2556170
00:30:32.383 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2556170
00:30:32.383 23:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2556170
00:30:32.383 Received shutdown signal, test time was about 0.969187 seconds
00:30:32.383 
00:30:32.383 Latency(us)
00:30:32.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme1n1 : 0.96 267.59 16.72 0.00 0.00 236407.76 17324.30 242540.19
00:30:32.383 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme2n1 : 0.93 206.56 12.91 0.00 0.00 300562.40 24732.72 251658.24
00:30:32.383 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme3n1 : 0.96 266.41 16.65 0.00 0.00 228171.46 17096.35 238892.97
00:30:32.383 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme4n1 : 0.97 265.03 16.56 0.00 0.00 225934.47 20059.71 240716.58
00:30:32.383 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme5n1 : 0.92 208.99 13.06 0.00 0.00 279883.84 19717.79 242540.19
00:30:32.383 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme6n1 : 0.97 264.33 16.52 0.00 0.00 217742.02 17552.25 248011.02
00:30:32.383 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme7n1 : 0.95 269.13 16.82 0.00 0.00 209343.67 18350.08 231598.53
00:30:32.383 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme8n1 : 0.94 271.89 16.99 0.00 0.00 202414.75 16982.37 226127.69
00:30:32.383 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme9n1 : 0.93 205.70 12.86 0.00 0.00 262000.94 32824.99 251658.24
00:30:32.383 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.383 Verification LBA range: start 0x0 length 0x400
00:30:32.383 Nvme10n1 : 0.94 204.45 12.78 0.00 0.00 258317.80 21085.50 271717.95
00:30:32.383 ===================================================================================================================
00:30:32.383 Total : 2430.08 151.88 0.00 0.00 238398.65 16982.37 271717.95
00:30:33.761 23:34:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@113 -- # sleep 1 00:30:34.704 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2555887 00:30:34.704 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:34.705 rmmod nvme_tcp 00:30:34.705 rmmod nvme_fabrics 00:30:34.705 rmmod nvme_keyring 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2555887 ']' 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2555887 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2555887 ']' 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2555887 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2555887 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2555887' 00:30:34.705 killing process with pid 2555887 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2555887 00:30:34.705 23:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2555887 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.995 23:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.531 00:30:40.531 real 0m13.486s 00:30:40.531 user 0m45.610s 00:30:40.531 sys 0m1.624s 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:40.531 ************************************ 00:30:40.531 END TEST nvmf_shutdown_tc2 00:30:40.531 ************************************ 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:40.531 23:34:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:40.532 ************************************ 00:30:40.532 START TEST nvmf_shutdown_tc3 00:30:40.532 ************************************ 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # 
xtrace_disable 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:40.532 23:34:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:40.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:40.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:40.532 Found net devices under 0000:86:00.0: cvl_0_0 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.532 23:34:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:40.532 Found net devices under 0000:86:00.1: cvl_0_1 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.532 23:34:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:40.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:30:40.532 00:30:40.532 --- 10.0.0.2 ping statistics --- 00:30:40.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.532 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:40.532 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:40.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:30:40.533 00:30:40.533 --- 10.0.0.1 ping statistics --- 00:30:40.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.533 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2558123 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2558123 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2558123 ']' 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:40.533 23:34:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:40.533 23:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:40.533 [2024-07-10 23:34:49.421558] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:30:40.533 [2024-07-10 23:34:49.421649] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.533 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.533 [2024-07-10 23:34:49.525479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.792 [2024-07-10 23:34:49.743505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.792 [2024-07-10 23:34:49.743549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.792 [2024-07-10 23:34:49.743560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.792 [2024-07-10 23:34:49.743570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.792 [2024-07-10 23:34:49.743579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:40.792 [2024-07-10 23:34:49.743645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.792 [2024-07-10 23:34:49.743716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.792 [2024-07-10 23:34:49.743801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.792 [2024-07-10 23:34:49.743826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:41.388 [2024-07-10 23:34:50.236693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.388 23:34:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:41.388 Malloc1 00:30:41.388 [2024-07-10 23:34:50.405924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.682 Malloc2 00:30:41.682 Malloc3 00:30:41.682 Malloc4 00:30:41.941 Malloc5 00:30:41.941 Malloc6 00:30:42.198 Malloc7 00:30:42.198 Malloc8 00:30:42.456 Malloc9 00:30:42.456 Malloc10 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2558530 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2558530 /var/tmp/bdevperf.sock 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2558530 ']' 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:42.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
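tc3 now repeats the payload generation shown earlier and launches bdevperf with the same `-q 64 -o 65536 -w verify -t 10` workload before killing the target out from under it. The gate that decides when the shutdown may proceed is the waitforio loop the tc2 trace exercised above, where num_read_ops for Nvme1n1 went 3, then 83, then 195 against the 100-read threshold. A sketch of that loop, reconstructed from the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; treat its exact signature as an assumption):

```bash
# waitforio, reconstructed from the target/shutdown.sh trace above: poll
# bdevperf's RPC socket until the bdev has served at least 100 reads,
# retrying up to 10 times with a 0.25s sleep between attempts.
waitforio() {
    local rpc_addr=$1 bdev=$2
    [ -z "$rpc_addr" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1
    local i
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        # Enough verify-mode reads have completed; it is now meaningful to
        # shut the target down while IO is in flight.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Used as: waitforio /var/tmp/bdevperf.sock Nvme1n1
```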
00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:42.456 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 
00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.457 { 00:30:42.457 "params": { 00:30:42.457 "name": "Nvme$subsystem", 00:30:42.457 "trtype": "$TEST_TRANSPORT", 00:30:42.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.457 "adrfam": "ipv4", 00:30:42.457 "trsvcid": "$NVMF_PORT", 00:30:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.457 "hdgst": ${hdgst:-false}, 00:30:42.457 "ddgst": ${ddgst:-false} 00:30:42.457 }, 00:30:42.457 "method": "bdev_nvme_attach_controller" 00:30:42.457 } 00:30:42.457 EOF 00:30:42.457 )") 00:30:42.457 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.716 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.716 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.716 { 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme$subsystem", 00:30:42.716 "trtype": "$TEST_TRANSPORT", 00:30:42.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "$NVMF_PORT", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.716 "hdgst": ${hdgst:-false}, 00:30:42.716 "ddgst": ${ddgst:-false} 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 } 00:30:42.716 EOF 00:30:42.716 )") 00:30:42.716 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:42.716 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:42.716 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:42.716 [2024-07-10 23:34:51.534099] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:30:42.716 [2024-07-10 23:34:51.534202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558530 ] 00:30:42.716 23:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme1", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme2", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme3", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme4", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme5", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme6", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme7", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme8", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 
00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme9", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 },{ 00:30:42.716 "params": { 00:30:42.716 "name": "Nvme10", 00:30:42.716 "trtype": "tcp", 00:30:42.716 "traddr": "10.0.0.2", 00:30:42.716 "adrfam": "ipv4", 00:30:42.716 "trsvcid": "4420", 00:30:42.716 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:42.716 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:42.716 "hdgst": false, 00:30:42.716 "ddgst": false 00:30:42.716 }, 00:30:42.716 "method": "bdev_nvme_attach_controller" 00:30:42.716 }' 00:30:42.716 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.716 [2024-07-10 23:34:51.639221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.975 [2024-07-10 23:34:51.872540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.878 Running I/O for 10 seconds... 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=12 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 12 -ge 100 ']' 00:30:45.136 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2558123 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2558123 ']' 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2558123 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2558123 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2558123' 00:30:45.403 killing process with pid 2558123 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2558123 00:30:45.403 23:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2558123 00:30:45.403 [2024-07-10 23:34:54.426113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:45.403 [2024-07-10 23:34:54.426181] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:45.404 [2024-07-10 23:34:54.426714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:45.404 [2024-07-10 23:34:54.429322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:45.404 [2024-07-10 23:34:54.429897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:45.404 [2024-07-10 23:34:54.432323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.432408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.435586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436095]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.436905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.436979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.436987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331600 is same with the state(5) to be set 00:30:45.405 [2024-07-10 23:34:54.437024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.437036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.437046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.405 [2024-07-10 23:34:54.437055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.405 [2024-07-10 23:34:54.437065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.406 [2024-07-10 23:34:54.437075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.437086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.406 [2024-07-10 23:34:54.437095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.437104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(5) to be set 00:30:45.406 [2024-07-10 23:34:54.437189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.406 [2024-07-10 23:34:54.437203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.437221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.406 [2024-07-10 23:34:54.437232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.437245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.406 [2024-07-10 23:34:54.437256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.437267] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.406 [2024-07-10 23:34:54.437276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.437286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:30:45.406 [2024-07-10 23:34:54.438765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:45.406 [2024-07-10 23:34:54.438796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:45.406 [2024-07-10 23:34:54.438806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:45.406 [2024-07-10 23:34:54.438817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:45.406 [2024-07-10 23:34:54.439338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.406 [2024-07-10 23:34:54.439939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.406 [2024-07-10 23:34:54.439949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.406 [2024-07-10 23:34:54.439961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.406 [2024-07-10 23:34:54.439970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.406 [2024-07-10 23:34:54.439981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.406 [2024-07-10 23:34:54.439991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.406 [2024-07-10 23:34:54.440002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.406 [2024-07-10 23:34:54.440012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.406 [2024-07-10 23:34:54.440024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.406 [2024-07-10 23:34:54.440023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.406 [2024-07-10 23:34:54.440036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.406 [2024-07-10 23:34:54.440047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.406 [2024-07-10 23:34:54.440048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.406 [2024-07-10 23:34:54.440061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.406 [2024-07-10 23:34:54.440061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.406 [2024-07-10 23:34:54.440074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.407 [2024-07-10 23:34:54.440429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.407 [2024-07-10 23:34:54.440439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.407 [2024-07-10 23:34:54.440448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
00:30:45.408 [2024-07-10 23:34:54.440694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.408 [2024-07-10 23:34:54.440840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.408 [2024-07-10 23:34:54.440879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.408 [2024-07-10 23:34:54.441142] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000333b80 was disconnected and freed. reset controller.
00:30:45.408 [2024-07-10 23:34:54.441598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 
23:34:54.441845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.408 [2024-07-10 23:34:54.441921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.408 [2024-07-10 23:34:54.441932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.441941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.441952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.441963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.441975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.441984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.441995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 
23:34:54.442061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 
23:34:54.442282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 
23:34:54.442492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 
23:34:54.442708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.409 [2024-07-10 23:34:54.442838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.409 [2024-07-10 23:34:54.442848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.442860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.442869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.442880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.442889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.442901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.442920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 
23:34:54.442931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.442943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.442955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.442964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.442976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.442986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.442998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.410 [2024-07-10 23:34:54.443009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.410 [2024-07-10 23:34:54.443245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443324] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000333680 was disconnected and freed. reset controller. 
00:30:45.410 [2024-07-10 23:34:54.443333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 
00:30:45.410 [2024-07-10 23:34:54.443517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 
00:30:45.410 [2024-07-10 23:34:54.443704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.443815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.444704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:45.410 [2024-07-10 23:34:54.444772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor 00:30:45.410 [2024-07-10 23:34:54.446166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:45.410 [2024-07-10 23:34:54.446199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor 00:30:45.410 [2024-07-10 23:34:54.447065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.447070] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.410 [2024-07-10 23:34:54.447090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.447100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.447109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.447118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.447127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.410 [2024-07-10 23:34:54.447136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.411 [2024-07-10 23:34:54.447266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e900 with addr=10.0.0.2, port=4420 00:30:45.411 [2024-07-10 23:34:54.447286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set
00:30:45.411 [2024-07-10 23:34:54.447297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331600 (9): Bad file descriptor 00:30:45.411 [2024-07-10 23:34:54.447344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor 00:30:45.411 [2024-07-10 23:34:54.447363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.411 [2024-07-10 23:34:54.447428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:30:45.411 [2024-07-10 23:34:54.447449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.411 [2024-07-10 23:34:54.447458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.411 [2024-07-10 23:34:54.447468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.411 [2024-07-10 23:34:54.447487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.411 [2024-07-10 23:34:54.447496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.411 [2024-07-10 23:34:54.447506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.411 [2024-07-10 23:34:54.447513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.411 [2024-07-10 23:34:54.447515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f080 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f800 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447711] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.412 [2024-07-10 23:34:54.447776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.412 [2024-07-10 23:34:54.447785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032ff80 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.447804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:30:45.412 [2024-07-10 23:34:54.447869] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.412 [2024-07-10 23:34:54.448341] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.412 [2024-07-10 23:34:54.448815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.412 [2024-07-10 23:34:54.448841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420 00:30:45.412 [2024-07-10 23:34:54.448852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.448868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor 00:30:45.412 [2024-07-10 23:34:54.448977] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.412 [2024-07-10 23:34:54.449033] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.412 [2024-07-10 23:34:54.449227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor 00:30:45.412 [2024-07-10 23:34:54.449248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:45.412 [2024-07-10 23:34:54.449259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:45.412 [2024-07-10 23:34:54.449272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:45.412 [2024-07-10 23:34:54.449501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.412 [2024-07-10 23:34:54.449521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:45.412 [2024-07-10 23:34:54.449530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:45.412 [2024-07-10 23:34:54.449540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:45.412 [2024-07-10 23:34:54.449732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.412 [2024-07-10 23:34:54.449810] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.412 [2024-07-10 23:34:54.449834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.449858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.449867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.449880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.412 [2024-07-10 23:34:54.449889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.449992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the 
state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the 
state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the 
state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.450408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:45.413 [2024-07-10 23:34:54.451198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 
23:34:54.451400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.413 [2024-07-10 23:34:54.451514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.413 [2024-07-10 23:34:54.451524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451627] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451807] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414
[2024-07-10 23:34:54.451926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.451983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.451990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.451992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.452002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.452022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.452035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.452044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.452053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.452075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.452085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.452097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.452106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.414 [2024-07-10 23:34:54.452126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.414 [2024-07-10 23:34:54.452136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.414 [2024-07-10 23:34:54.452145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452177]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.452324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 
23:34:54.452484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.415 [2024-07-10 23:34:54.452742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.415 [2024-07-10 23:34:54.452753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334580 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.453038] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000334580 was disconnected and freed. reset controller. 00:30:45.415 [2024-07-10 23:34:54.454208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:45.415 [2024-07-10 23:34:54.454266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330700 (9): Bad file descriptor 00:30:45.415 [2024-07-10 23:34:54.454984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.415 [2024-07-10 23:34:54.455010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330700 with addr=10.0.0.2, port=4420 00:30:45.415 [2024-07-10 23:34:54.455022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330700 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.455146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330700 (9): Bad file descriptor 00:30:45.415 [2024-07-10 23:34:54.455249] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:45.415 [2024-07-10 23:34:54.455272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:45.415 [2024-07-10 23:34:54.455282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:45.415 [2024-07-10 23:34:54.455293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:45.415 [2024-07-10 23:34:54.455358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
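The repeated "ABORTED - SQ DELETION (00/08)" completions above are the expected fallout of the reset: status code type 0x0 (generic command status) with status code 0x08 is "Command Aborted due to SQ Deletion", i.e. every in-flight READ on qid:1 was aborted when the submission queue was torn down. The subsequent connect() failures report errno = 111, which on Linux is ECONNREFUSED: the target at 10.0.0.2:4420 is not yet accepting TCP connections while the subsystem restarts, so each reconnect attempt fails and spdk_nvme_ctrlr_reconnect_poll_async() leaves the controller in the failed state. A minimal sketch of how a refused connection surfaces at the socket layer (plain POSIX C, a standalone illustration only, not SPDK's actual posix_sock_create()):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Target address taken from the log for illustration; any
         * host:port with no active listener behaves the same way. */
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With nothing listening on the port, errno is ECONNREFUSED,
             * which is 111 on Linux: the value posix_sock_create() logs. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Run against a host/port with no listener, this prints the same "connect() failed, errno = 111" seen in the records above; once the target comes back up, the same connect() succeeds and the reconnect path can complete.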
00:30:45.415 [2024-07-10 23:34:54.456336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:45.415 [2024-07-10 23:34:54.456585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.415 [2024-07-10 23:34:54.456606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e900 with addr=10.0.0.2, port=4420 00:30:45.415 [2024-07-10 23:34:54.456617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(5) to be set 00:30:45.415 [2024-07-10 23:34:54.456667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor 00:30:45.415 [2024-07-10 23:34:54.456717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:45.415 [2024-07-10 23:34:54.456727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:45.415 [2024-07-10 23:34:54.456736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:45.416 [2024-07-10 23:34:54.456788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.416 [2024-07-10 23:34:54.457168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.416 [2024-07-10 23:34:54.457185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.416 [2024-07-10 23:34:54.457207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.416 [2024-07-10 23:34:54.457229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.416 [2024-07-10 23:34:54.457250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330e80 is same with the state(5) to be set 00:30:45.416 [2024-07-10 23:34:54.457285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f080 (9): Bad file descriptor 00:30:45.416 [2024-07-10 23:34:54.457306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f800 (9): Bad file descriptor 00:30:45.416 [2024-07-10 23:34:54.457328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ff80 (9): Bad file descriptor 00:30:45.416 [2024-07-10 23:34:54.457468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:45.416 [2024-07-10 23:34:54.457712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 
[2024-07-10 23:34:54.457936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.457989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.457999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.458020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.458041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.458063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.458084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.458108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 23:34:54.458130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.416 [2024-07-10 23:34:54.458142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.416 [2024-07-10 
23:34:54.458153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.417 [2024-07-10 23:34:54.458686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.417 [2024-07-10 23:34:54.458697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.683 [2024-07-10 23:34:54.465618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.683 [2024-07-10 23:34:54.465642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.683 [2024-07-10 23:34:54.465664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.683 [2024-07-10 23:34:54.465687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.683 [2024-07-10 23:34:54.465709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.683 [2024-07-10 23:34:54.465731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.683 [2024-07-10 23:34:54.465741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.465753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.465765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.465777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.465787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.465799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333400 is same with the state(5) to be set 00:30:45.684 [2024-07-10 23:34:54.467158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467355] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.467972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.467988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.684 [2024-07-10 23:34:54.468376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.684 [2024-07-10 23:34:54.468391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.685 [2024-07-10 23:34:54.468421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.685 [2024-07-10 23:34:54.468452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.685 [2024-07-10 23:34:54.468482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.685 [2024-07-10 23:34:54.468514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.685 [2024-07-10 23:34:54.468545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.685 [2024-07-10 23:34:54.468575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.685 [2024-07-10 23:34:54.468592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:45.685 [2024-07-10 23:34:54.468606 - 23:34:54.469120] nvme_qpair.c: *NOTICE*: repeated x17: READ sqid:1 cid:47-63 nsid:1 lba:22400-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.685 [2024-07-10 23:34:54.469135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333900 is same with the state(5) to be set
00:30:45.685 [2024-07-10 23:34:54.470963 - 23:34:54.472966] nvme_qpair.c: *NOTICE*: repeated x64: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.687 [2024-07-10 23:34:54.472980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334a80 is same with the state(5) to be set
00:30:45.687 [2024-07-10 23:34:54.479082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.687 [2024-07-10 23:34:54.479112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:30:45.687 [2024-07-10 23:34:54.479125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:30:45.687 [2024-07-10 23:34:54.479214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330e80 (9): Bad file descriptor
00:30:45.687 [2024-07-10 23:34:54.479260] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.687 [2024-07-10 23:34:54.479352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:45.687 [2024-07-10 23:34:54.479618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.687 [2024-07-10 23:34:54.479640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:30:45.687 [2024-07-10 23:34:54.479652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:30:45.687 [2024-07-10 23:34:54.479828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.687 [2024-07-10 23:34:54.479844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=4420
00:30:45.687 [2024-07-10 23:34:54.479855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(5) to be set
00:30:45.687 [2024-07-10 23:34:54.480124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.687 [2024-07-10 23:34:54.480143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000331600 with addr=10.0.0.2, port=4420
00:30:45.687 [2024-07-10 23:34:54.480153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331600 is same with the state(5) to be set
00:30:45.687 [2024-07-10 23:34:54.480975 - 23:34:54.482432] nvme_qpair.c: *NOTICE*: repeated x64: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.688 [2024-07-10 23:34:54.482443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333e00 is same with the state(5) to be set
00:30:45.688 [2024-07-10 23:34:54.483741 - 23:34:54.483893] nvme_qpair.c: *NOTICE*: repeated x6: READ sqid:1 cid:4-5 nsid:1 lba:16896-17024 and WRITE sqid:1 cid:0-3 nsid:1 lba:24576-24960, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.689 [2024-07-10 23:34:54.483905 - 23:34:54.484823] nvme_qpair.c: *NOTICE*: repeated x42: READ sqid:1 cid:6-47 nsid:1 lba:17152-22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.690 [2024-07-10 23:34:54.484834] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.484984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.484994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.485196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.485207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334080 is same with the state(5) to be set 00:30:45.690 [2024-07-10 23:34:54.486493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486577] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.486985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.486997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.487008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.487020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.487031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.690 [2024-07-10 23:34:54.487042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.690 [2024-07-10 23:34:54.487052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:45.691 [2024-07-10 23:34:54.487493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 
23:34:54.487710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.691 [2024-07-10 23:34:54.487918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.691 [2024-07-10 23:34:54.487929] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.691 [2024-07-10 23:34:54.487939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334300 is same with the state(5) to be set
00:30:45.691 [2024-07-10 23:34:54.489578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:45.691 [2024-07-10 23:34:54.489603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:45.691 [2024-07-10 23:34:54.489615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:45.691 [2024-07-10 23:34:54.489628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:30:45.691 [2024-07-10 23:34:54.489641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:30:45.691 [2024-07-10 23:34:54.489866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.692 [2024-07-10 23:34:54.489887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420
00:30:45.692 [2024-07-10 23:34:54.489904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set
00:30:45.692 [2024-07-10 23:34:54.489919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:30:45.692 [2024-07-10 23:34:54.489934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor
00:30:45.692 [2024-07-10 23:34:54.489949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331600 (9): Bad file descriptor
00:30:45.692 [2024-07-10 23:34:54.489983] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.692 [2024-07-10 23:34:54.490001] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.692 [2024-07-10 23:34:54.490016] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.692 [2024-07-10 23:34:54.490028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor
00:30:45.692 [2024-07-10 23:34:54.490288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.692 [2024-07-10 23:34:54.490309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330700 with addr=10.0.0.2, port=4420
00:30:45.692 [2024-07-10 23:34:54.490320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330700 is same with the state(5) to be set
00:30:45.692 [2024-07-10 23:34:54.490499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.692 [2024-07-10 23:34:54.490514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e900 with addr=10.0.0.2, port=4420
00:30:45.692 [2024-07-10 23:34:54.490525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(5) to be set
00:30:45.692 [2024-07-10 23:34:54.490710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.692 [2024-07-10 23:34:54.490725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f080 with addr=10.0.0.2, port=4420
00:30:45.692 [2024-07-10 23:34:54.490736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f080 is same with the state(5) to be set
00:30:45.692 [2024-07-10 23:34:54.490865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.692 [2024-07-10 23:34:54.490880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f800 with addr=10.0.0.2, port=4420
00:30:45.692 [2024-07-10 23:34:54.490894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f800 is same with the state(5) to be set
00:30:45.692 [2024-07-10 23:34:54.491023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.692 [2024-07-10 23:34:54.491038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:30:45.692 [2024-07-10 23:34:54.491049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032ff80 is same with the state(5) to be set
00:30:45.692 [2024-07-10 23:34:54.491061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.692 [2024-07-10 23:34:54.491072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.692 [2024-07-10 23:34:54.491084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.692 [2024-07-10 23:34:54.491101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:30:45.692 [2024-07-10 23:34:54.491110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:30:45.692 [2024-07-10 23:34:54.491119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:30:45.692 [2024-07-10 23:34:54.491133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:45.692 [2024-07-10 23:34:54.491143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:45.692 [2024-07-10 23:34:54.491151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:45.692 [2024-07-10 23:34:54.492327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.692 [2024-07-10 23:34:54.492840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.692 [2024-07-10 23:34:54.492850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.492873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.492895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.492917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.492940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.492962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.492985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.492998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.693 [2024-07-10 23:34:54.493233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.693 [2024-07-10 23:34:54.493243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:45.693 [2024-07-10 23:34:54.493255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.693 [2024-07-10 23:34:54.493650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.693 [2024-07-10 23:34:54.493663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334800 is same with the state(5) to be set
00:30:45.693 [2024-07-10 23:34:54.498594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.693 [2024-07-10 23:34:54.498617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.693 [2024-07-10 23:34:54.498625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.693 task offset: 21248 on job bdev=Nvme4n1 fails
00:30:45.693
00:30:45.693 Latency(us)
00:30:45.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:45.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.693 Job: Nvme1n1 ended in about 0.70 seconds with error
00:30:45.693 Verification LBA range: start 0x0 length 0x400
00:30:45.693 Nvme1n1 : 0.70 183.24 11.45 91.62 0.00 229715.85 25872.47 212450.62
00:30:45.693 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.693 Job: Nvme2n1 ended in about 0.68 seconds with error
00:30:45.693 Verification LBA range: start 0x0 length 0x400
00:30:45.693 Nvme2n1 : 0.68 188.92 11.81 94.46 0.00 217025.15 6097.70 251658.24
00:30:45.693 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.693 Job: Nvme3n1 ended in about 0.70 seconds with error
00:30:45.693 Verification LBA range: start 0x0 length 0x400
00:30:45.693 Nvme3n1 : 0.70 182.35 11.40 91.17 0.00 219439.34 15842.62 246187.41
00:30:45.693 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.693 Job: Nvme4n1 ended in about 0.68 seconds with error
00:30:45.693 Verification LBA range: start 0x0 length 0x400
00:30:45.693 Nvme4n1 : 0.68 189.31 11.83 94.65 0.00 205137.85 4388.06 219745.06
00:30:45.693 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.693 Job: Nvme5n1 ended in about 0.72 seconds with error
00:30:45.693 Verification LBA range: start 0x0 length 0x400
00:30:45.694 Nvme5n1 : 0.72 89.50 5.59 89.50 0.00 318749.83 19831.76 286306.84
00:30:45.694 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.694 Job: Nvme6n1 ended in about 0.72 seconds with error
00:30:45.694 Verification LBA range: start 0x0 length 0x400
00:30:45.694 Nvme6n1 : 0.72 183.88 11.49 89.15 0.00 203438.11 12822.26 237069.36
00:30:45.694 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.694 Job: Nvme7n1 ended in about 0.72 seconds with error
00:30:45.694 Verification LBA range: start 0x0 length 0x400
00:30:45.694 Nvme7n1 : 0.72 177.63 11.10 88.81 0.00 202905.75 36472.21 240716.58
00:30:45.694 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.694 Job: Nvme8n1 ended in about 0.69 seconds with error
00:30:45.694 Verification LBA range: start 0x0 length 0x400
00:30:45.694 Nvme8n1 : 0.69 192.45 12.03 93.31 0.00 181834.78 29177.77 217921.45
00:30:45.694 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.694 Job: Nvme9n1 ended in about 0.73 seconds with error
00:30:45.694 Verification LBA range: start 0x0 length 0x400
00:30:45.694 Nvme9n1 : 0.73 97.76 6.11 78.48 0.00 287992.88 19945.74 257129.07
00:30:45.694 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:45.694 Job: Nvme10n1 ended in about 0.71 seconds with error
00:30:45.694 Verification LBA range: start 0x0 length 0x400
00:30:45.694 Nvme10n1 : 0.71 90.68 5.67 90.68 0.00 271575.49 20629.59 277188.79
00:30:45.694 ===================================================================================================================
00:30:45.694 Total : 1575.70 98.48 901.84 0.00 227067.52 4388.06 286306.84
00:30:45.694 [2024-07-10 23:34:54.589218] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:45.694 [2024-07-10 23:34:54.589275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:45.694 [2024-07-10 23:34:54.589330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330700 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.589348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.589361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f080 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.589374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f800 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.589386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ff80 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.589397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.589406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.589418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:45.694 [2024-07-10 23:34:54.589602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.589956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.694 [2024-07-10 23:34:54.589979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330e80 with addr=10.0.0.2, port=4420
00:30:45.694 [2024-07-10 23:34:54.589993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330e80 is same with the state(5) to be set
00:30:45.694 [2024-07-10 23:34:54.590005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.590015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.590025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:30:45.694 [2024-07-10 23:34:54.590042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.590050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.590060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:30:45.694 [2024-07-10 23:34:54.590074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.590084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.590092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:30:45.694 [2024-07-10 23:34:54.590106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.590115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.590124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:45.694 [2024-07-10 23:34:54.590137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.590147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.590156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:30:45.694 [2024-07-10 23:34:54.590209] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.694 [2024-07-10 23:34:54.590225] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.694 [2024-07-10 23:34:54.590241] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.694 [2024-07-10 23:34:54.590255] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.694 [2024-07-10 23:34:54.590266] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:45.694 [2024-07-10 23:34:54.590757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.590778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.590786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.590794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.590802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.590827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330e80 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.590900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:30:45.694 [2024-07-10 23:34:54.590921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:30:45.694 [2024-07-10 23:34:54.590948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.590959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.590969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:30:45.694 [2024-07-10 23:34:54.591005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:45.694 [2024-07-10 23:34:54.591018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:45.694 [2024-07-10 23:34:54.591044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.591295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.694 [2024-07-10 23:34:54.591316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000331600 with addr=10.0.0.2, port=4420
00:30:45.694 [2024-07-10 23:34:54.591328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331600 is same with the state(5) to be set
00:30:45.694 [2024-07-10 23:34:54.591582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.694 [2024-07-10 23:34:54.591598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=4420
00:30:45.694 [2024-07-10 23:34:54.591609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(5) to be set
00:30:45.694 [2024-07-10 23:34:54.591870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.694 [2024-07-10 23:34:54.591886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:30:45.694 [2024-07-10 23:34:54.591897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:30:45.694 [2024-07-10 23:34:54.592071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.694 [2024-07-10 23:34:54.592087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420
00:30:45.694 [2024-07-10 23:34:54.592098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set
00:30:45.694 [2024-07-10 23:34:54.592111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331600 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.592125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.592176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.592191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor
00:30:45.694 [2024-07-10 23:34:54.592203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.592211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.592221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:30:45.694 [2024-07-10 23:34:54.592236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.592244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.592253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:30:45.694 [2024-07-10 23:34:54.592291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.592302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.592310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.592318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.592327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:45.694 [2024-07-10 23:34:54.592339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:45.694 [2024-07-10 23:34:54.592348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:45.694 [2024-07-10 23:34:54.592357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:45.694 [2024-07-10 23:34:54.592393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:45.694 [2024-07-10 23:34:54.592404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.983 23:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:30:48.983 23:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2558530
00:30:49.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2558530) - No such process
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:49.918 rmmod nvme_tcp
00:30:49.918 rmmod nvme_fabrics
00:30:49.918 rmmod nvme_keyring
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:49.918 23:34:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:52.455 23:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:52.455
00:30:52.455 real 0m11.834s
00:30:52.455 user 0m34.483s
00:30:52.455 sys 0m1.527s
23:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:52.455 23:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:52.455 ************************************
00:30:52.455 END TEST nvmf_shutdown_tc3
00:30:52.455 ************************************
00:30:52.455 23:35:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:30:52.455 23:35:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:30:52.455
00:30:52.455 real 0m46.640s
00:30:52.455 user 2m19.113s
00:30:52.455 sys 0m9.111s
00:30:52.455 23:35:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:52.455 23:35:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:52.455 ************************************
00:30:52.455 END TEST nvmf_shutdown
00:30:52.455 ************************************
00:30:52.455 23:35:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:52.455 23:35:00 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:30:52.455 23:35:00 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:52.455 23:35:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:52.455 23:35:01 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:30:52.455 23:35:01 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:52.455 23:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:52.455 23:35:01 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:30:52.455 23:35:01 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:30:52.455 23:35:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:52.455 23:35:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:52.455 23:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:52.455 ************************************
00:30:52.455 START TEST nvmf_multicontroller
00:30:52.455 ************************************
00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:30:52.455 * Looking for test storage...
00:30:52.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.455 23:35:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:52.456 23:35:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.456 23:35:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.744 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.745 23:35:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:57.745 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:57.745 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:57.745 Found net devices under 0000:86:00.0: cvl_0_0 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:57.745 Found net devices under 0000:86:00.1: cvl_0_1 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.745 23:35:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:57.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:57.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms
00:30:57.745
00:30:57.745 --- 10.0.0.2 ping statistics ---
00:30:57.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:57.745 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:57.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:57.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms
00:30:57.745
00:30:57.745 --- 10.0.0.1 ping statistics ---
00:30:57.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:57.745 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2563144
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2563144
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2563144 ']'
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller --
common/autotest_common.sh@834 -- # local max_retries=100 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:57.745 23:35:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.745 [2024-07-10 23:35:06.451558] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:30:57.745 [2024-07-10 23:35:06.451659] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.745 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.745 [2024-07-10 23:35:06.557273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.745 [2024-07-10 23:35:06.774910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.745 [2024-07-10 23:35:06.774949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.745 [2024-07-10 23:35:06.774963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.745 [2024-07-10 23:35:06.774971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.745 [2024-07-10 23:35:06.774980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:57.745 [2024-07-10 23:35:06.775107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.745 [2024-07-10 23:35:06.775183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.745 [2024-07-10 23:35:06.775193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.313 [2024-07-10 23:35:07.282678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.313 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.572 Malloc0 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.572 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 [2024-07-10 23:35:07.421688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 
23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 [2024-07-10 23:35:07.429620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 Malloc1 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2563391 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2563391 /var/tmp/bdevperf.sock 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2563391 ']' 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:58.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:58.573 23:35:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.512 NVMe0n1 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.512 1 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.512 request: 00:30:59.512 { 00:30:59.512 "name": "NVMe0", 00:30:59.512 "trtype": "tcp", 00:30:59.512 "traddr": "10.0.0.2", 00:30:59.512 "adrfam": "ipv4", 00:30:59.512 "trsvcid": "4420", 00:30:59.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.512 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:59.512 "hostaddr": "10.0.0.2", 00:30:59.512 "hostsvcid": "60000", 00:30:59.512 "prchk_reftag": false, 00:30:59.512 "prchk_guard": false, 00:30:59.512 "hdgst": false, 00:30:59.512 "ddgst": false, 00:30:59.512 "method": "bdev_nvme_attach_controller", 00:30:59.512 "req_id": 1 00:30:59.512 } 00:30:59.512 Got JSON-RPC error response 00:30:59.512 response: 00:30:59.512 { 00:30:59.512 "code": -114, 00:30:59.512 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:59.512 } 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.512 request: 00:30:59.512 { 00:30:59.512 "name": "NVMe0", 00:30:59.512 "trtype": "tcp", 00:30:59.512 "traddr": "10.0.0.2", 00:30:59.512 "adrfam": "ipv4", 00:30:59.512 "trsvcid": "4420", 00:30:59.512 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:59.512 "hostaddr": "10.0.0.2", 00:30:59.512 "hostsvcid": "60000", 00:30:59.512 "prchk_reftag": false, 00:30:59.512 "prchk_guard": false, 00:30:59.512 
"hdgst": false, 00:30:59.512 "ddgst": false, 00:30:59.512 "method": "bdev_nvme_attach_controller", 00:30:59.512 "req_id": 1 00:30:59.512 } 00:30:59.512 Got JSON-RPC error response 00:30:59.512 response: 00:30:59.512 { 00:30:59.512 "code": -114, 00:30:59.512 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:59.512 } 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.512 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.512 request: 00:30:59.512 { 00:30:59.512 "name": "NVMe0", 00:30:59.512 "trtype": "tcp", 00:30:59.512 "traddr": "10.0.0.2", 00:30:59.512 "adrfam": "ipv4", 00:30:59.512 "trsvcid": "4420", 00:30:59.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.512 "hostaddr": "10.0.0.2", 00:30:59.512 "hostsvcid": "60000", 00:30:59.512 "prchk_reftag": false, 00:30:59.512 "prchk_guard": false, 00:30:59.512 "hdgst": false, 00:30:59.512 "ddgst": false, 00:30:59.512 "multipath": "disable", 00:30:59.512 "method": "bdev_nvme_attach_controller", 00:30:59.512 "req_id": 1 00:30:59.512 } 00:30:59.512 Got JSON-RPC error response 00:30:59.512 response: 00:30:59.512 { 00:30:59.512 "code": -114, 00:30:59.512 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:59.513 } 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.513 23:35:08 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.513 request: 00:30:59.513 { 00:30:59.513 "name": "NVMe0", 00:30:59.513 "trtype": "tcp", 00:30:59.513 "traddr": "10.0.0.2", 00:30:59.513 "adrfam": "ipv4", 00:30:59.513 "trsvcid": "4420", 00:30:59.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.513 "hostaddr": "10.0.0.2", 00:30:59.513 "hostsvcid": "60000", 00:30:59.513 "prchk_reftag": false, 00:30:59.513 "prchk_guard": false, 00:30:59.513 "hdgst": false, 00:30:59.513 "ddgst": false, 00:30:59.513 "multipath": "failover", 00:30:59.513 "method": "bdev_nvme_attach_controller", 00:30:59.513 "req_id": 1 00:30:59.513 } 00:30:59.513 Got JSON-RPC error response 00:30:59.513 response: 00:30:59.513 { 00:30:59.513 "code": -114, 00:30:59.513 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:59.513 } 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:59.513 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.773 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.773 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.032 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:00.032 23:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:00.971 0 00:31:00.971 23:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:00.971 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.971 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2563391 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2563391 ']' 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2563391 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2563391 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2563391' 00:31:01.230 killing process with pid 2563391 00:31:01.230 23:35:10 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2563391 00:31:01.230 23:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2563391 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:31:02.209 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:02.209 [2024-07-10 23:35:07.622165] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:31:02.209 [2024-07-10 23:35:07.622260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563391 ] 00:31:02.209 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.209 [2024-07-10 23:35:07.725665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.209 [2024-07-10 23:35:07.954522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.209 [2024-07-10 23:35:08.888251] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 137e487d-4084-4890-a5d7-570ae7f49baf already exists 00:31:02.209 [2024-07-10 23:35:08.888296] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:137e487d-4084-4890-a5d7-570ae7f49baf alias for bdev NVMe1n1 00:31:02.209 [2024-07-10 23:35:08.888309] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:02.209 Running I/O for 1 seconds... 
00:31:02.209
00:31:02.209 Latency(us)
00:31:02.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.209 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:31:02.209 NVMe0n1 : 1.01 21318.27 83.27 0.00 0.00 5996.02 1894.85 10770.70
00:31:02.209 ===================================================================================================================
00:31:02.209 Total : 21318.27 83.27 0.00 0.00 5996.02 1894.85 10770.70
00:31:02.209 Received shutdown signal, test time was about 1.000000 seconds
00:31:02.209
00:31:02.209 Latency(us)
00:31:02.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.209 ===================================================================================================================
00:31:02.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:02.209 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:02.209 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:02.209 rmmod nvme_tcp
00:31:02.209 rmmod nvme_fabrics
00:31:02.209 rmmod nvme_keyring
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2563144 ']'
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2563144
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2563144 ']'
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2563144
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2563144
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:31:02.479 23:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2563144'
killing process with pid 2563144
23:35:11
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2563144 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:04.386 23:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.291 23:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.291 00:31:06.291 real 0m14.037s 00:31:06.291 user 0m23.095s 00:31:06.291 sys 0m4.855s 00:31:06.291 23:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:06.291 23:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.291 ************************************ 00:31:06.291 END TEST nvmf_multicontroller 00:31:06.291 ************************************ 00:31:06.291 23:35:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:06.291 23:35:15 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:06.291 23:35:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:06.291 23:35:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.291 23:35:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.291 ************************************ 00:31:06.291 START TEST nvmf_aer 00:31:06.291 ************************************ 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:06.291 * Looking for test storage... 
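Aside, not captured output: the multicontroller sequence that just finished is, at its core, a handful of bdev_nvme RPCs. A minimal sketch for replaying it by hand, assuming the stock scripts/rpc.py client, the /var/tmp/bdevperf.sock socket, and the addresses used in this run:

RPC="rpc.py -s /var/tmp/bdevperf.sock"
# The first attach through the 4420 listener creates controller NVMe0 (bdev NVMe0n1).
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Reusing the name NVMe0 with a different hostnqn, a different subsystem (cnode2),
# or with -x disable / -x failover on the same path is rejected with JSON-RPC
# error -114, exactly as the request/response pairs above show.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable \
    || echo "rejected: NVMe0 already exists"
# Only a genuinely new path to the same subsystem, here the 4421 listener, succeeds.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1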
00:31:06.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.291 23:35:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.292 23:35:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:11.564 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:11.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:31:11.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:11.565 Found net devices under 0000:86:00.0: cvl_0_0 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:11.565 Found net devices under 0000:86:00.1: cvl_0_1 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.565 
23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:11.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:31:11.565 00:31:11.565 --- 10.0.0.2 ping statistics --- 00:31:11.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.565 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:31:11.565 00:31:11.565 --- 10.0.0.1 ping statistics --- 00:31:11.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.565 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2567619 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2567619 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2567619 ']' 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:11.565 23:35:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:11.565 [2024-07-10 23:35:20.512288] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:31:11.565 [2024-07-10 23:35:20.512378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.565 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.565 [2024-07-10 23:35:20.621318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.825 [2024-07-10 23:35:20.837053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.825 [2024-07-10 23:35:20.837096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:11.825 [2024-07-10 23:35:20.837107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.825 [2024-07-10 23:35:20.837115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.825 [2024-07-10 23:35:20.837141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.825 [2024-07-10 23:35:20.837255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.825 [2024-07-10 23:35:20.837270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.825 [2024-07-10 23:35:20.837355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.825 [2024-07-10 23:35:20.837365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.394 [2024-07-10 23:35:21.340263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.394 Malloc0 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.394 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.394 [2024-07-10 23:35:21.458876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.654 [ 00:31:12.654 { 00:31:12.654 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:12.654 "subtype": "Discovery", 00:31:12.654 "listen_addresses": [], 00:31:12.654 "allow_any_host": true, 00:31:12.654 "hosts": [] 00:31:12.654 }, 00:31:12.654 { 00:31:12.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.654 "subtype": "NVMe", 00:31:12.654 "listen_addresses": [ 00:31:12.654 { 00:31:12.654 "trtype": "TCP", 00:31:12.654 "adrfam": "IPv4", 00:31:12.654 "traddr": "10.0.0.2", 00:31:12.654 "trsvcid": "4420" 00:31:12.654 } 00:31:12.654 ], 00:31:12.654 "allow_any_host": true, 00:31:12.654 "hosts": [], 00:31:12.654 "serial_number": "SPDK00000000000001", 00:31:12.654 "model_number": "SPDK bdev Controller", 00:31:12.654 "max_namespaces": 2, 00:31:12.654 "min_cntlid": 1, 00:31:12.654 "max_cntlid": 65519, 00:31:12.654 "namespaces": [ 00:31:12.654 { 00:31:12.654 "nsid": 1, 00:31:12.654 "bdev_name": "Malloc0", 00:31:12.654 "name": "Malloc0", 00:31:12.654 "nguid": "8B87CD3D58274668A9AF86B4DBC44FDD", 00:31:12.654 "uuid": "8b87cd3d-5827-4668-a9af-86b4dbc44fdd" 00:31:12.654 } 00:31:12.654 ] 00:31:12.654 } 00:31:12.654 ] 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2567678 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:12.654 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:31:12.654 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.913 Malloc1 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.913 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.173 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.173 23:35:21 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:13.173 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.173 23:35:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.173 [ 00:31:13.173 { 00:31:13.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:13.173 "subtype": "Discovery", 00:31:13.173 "listen_addresses": [], 00:31:13.173 "allow_any_host": true, 00:31:13.173 "hosts": [] 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.173 "subtype": "NVMe", 00:31:13.173 "listen_addresses": [ 00:31:13.173 { 00:31:13.173 "trtype": "TCP", 00:31:13.173 "adrfam": "IPv4", 00:31:13.173 "traddr": "10.0.0.2", 00:31:13.173 "trsvcid": "4420" 00:31:13.173 } 00:31:13.173 ], 00:31:13.173 "allow_any_host": true, 00:31:13.173 "hosts": [], 00:31:13.173 "serial_number": "SPDK00000000000001", 00:31:13.173 "model_number": "SPDK bdev Controller", 00:31:13.173 "max_namespaces": 2, 00:31:13.173 "min_cntlid": 1, 00:31:13.173 "max_cntlid": 65519, 00:31:13.173 "namespaces": [ 00:31:13.173 { 00:31:13.173 "nsid": 1, 00:31:13.173 "bdev_name": "Malloc0", 00:31:13.173 "name": "Malloc0", 00:31:13.173 "nguid": "8B87CD3D58274668A9AF86B4DBC44FDD", 00:31:13.173 "uuid": "8b87cd3d-5827-4668-a9af-86b4dbc44fdd" 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "nsid": 2, 00:31:13.173 "bdev_name": "Malloc1", 00:31:13.173 "name": "Malloc1", 00:31:13.173 "nguid": "2753DC0D5D1F4F9094201B478B3C502E", 00:31:13.173 "uuid": "2753dc0d-5d1f-4f90-9420-1b478b3c502e" 00:31:13.173 } 00:31:13.173 ] 00:31:13.173 } 00:31:13.173 ] 00:31:13.173 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.173 23:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2567678 00:31:13.173 Asynchronous Event Request test 00:31:13.173 Attaching to 10.0.0.2 00:31:13.173 Attached to 10.0.0.2 00:31:13.173 Registering asynchronous event callbacks... 00:31:13.173 Starting namespace attribute notice tests for all controllers... 
00:31:13.173 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:13.173 aer_cb - Changed Namespace 00:31:13.173 Cleaning up... 00:31:13.173 23:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:13.173 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.173 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:13.433 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:13.433 rmmod nvme_tcp 00:31:13.433 rmmod nvme_fabrics 00:31:13.433 rmmod nvme_keyring 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2567619 ']' 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2567619 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2567619 ']' 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2567619 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2567619 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2567619' 00:31:13.693 killing process with pid 2567619 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2567619 00:31:13.693 23:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2567619 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.072 23:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.980 23:35:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:16.980 00:31:16.980 real 0m10.784s 00:31:16.980 user 0m12.079s 00:31:16.980 sys 0m4.537s 00:31:16.980 23:35:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:16.980 23:35:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.980 ************************************ 00:31:16.980 END TEST nvmf_aer 00:31:16.980 ************************************ 00:31:16.980 23:35:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:16.980 23:35:25 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:16.980 23:35:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:16.980 23:35:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.980 23:35:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.980 ************************************ 00:31:16.980 START TEST nvmf_async_init 00:31:16.980 ************************************ 00:31:16.980 23:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:17.240 * Looking for test storage... 
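Aside, not captured output: the nvmf_aer run above reduces to a short RPC sequence; the namespace-change AEN is fired by attaching a second namespace while the aer tool is connected. A minimal sketch, with commands taken from this run (scripts/rpc.py assumed on PATH and pointed at the target inside cvl_0_0_ns_spdk):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer tool connects, registers its AER callback and waits (-n 2: expect 2 namespaces).
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# Adding nsid 2 is what triggers the "Changed Namespace" notice seen above
# (log page 4, aen_event_type 0x02, aen_event_info 0x00).
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2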
00:31:17.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.240 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0aab8b91a7bf40caaec3752ca251734f 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:17.241 23:35:26 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:31:17.241 23:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:22.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:22.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:22.517 Found net devices under 0000:86:00.0: cvl_0_0 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
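(Annotation, for readers following the device-discovery xtrace above: common.sh resolves each whitelisted PCI function to its kernel net device by globbing sysfs, exactly as the pci_net_devs assignments in the trace show. A minimal standalone sketch of that lookup, using this run's PCI addresses as assumed examples rather than the literal common.sh source:

    # Map a PCI function to its netdev the way the trace does: glob the
    # device's net/ directory in sysfs, then strip the path to the ifname.
    for pci in 0000:86:00.0 0000:86:00.1; do           # example addresses from this run
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # no netdev bound to this function -> skip
        pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

The trace now repeats the same loop body for the second port, 0000:86:00.1.)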
00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:22.517 Found net devices under 0000:86:00.1: cvl_0_1 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.517 23:35:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:22.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:31:22.517 00:31:22.517 --- 10.0.0.2 ping statistics --- 00:31:22.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.517 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:31:22.517 00:31:22.517 --- 10.0.0.1 ping statistics --- 00:31:22.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.517 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:22.517 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2571397 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2571397 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2571397 ']' 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:22.518 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:22.518 [2024-07-10 23:35:31.136819] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
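(Annotation: the network plumbing nvmf_tcp_init just performed condenses to the following sequence, lifted from the trace above; interface names and addresses are the ones this run uses, and the initial ip -4 addr flush calls are omitted here:

    # Isolate the target-side E810 port in its own netns so the initiator
    # (10.0.0.1, root namespace) and the target (10.0.0.2, cvl_0_0_ns_spdk)
    # exchange real TCP traffic over the back-to-back ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The target application itself then runs inside the namespace, via the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" invocation at nvmf/common.sh@480 just above; the DPDK EAL startup banner it prints continues below.)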
00:31:22.518 [2024-07-10 23:35:31.136921] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.518 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.518 [2024-07-10 23:35:31.246778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.518 [2024-07-10 23:35:31.456460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.518 [2024-07-10 23:35:31.456502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.518 [2024-07-10 23:35:31.456514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.518 [2024-07-10 23:35:31.456524] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.518 [2024-07-10 23:35:31.456533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.518 [2024-07-10 23:35:31.456567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 [2024-07-10 23:35:31.934530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 null0 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 23:35:31 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0aab8b91a7bf40caaec3752ca251734f 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.087 [2024-07-10 23:35:31.974773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.087 23:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.346 nvme0n1 00:31:23.346 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.346 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 [ 00:31:23.347 { 00:31:23.347 "name": "nvme0n1", 00:31:23.347 "aliases": [ 00:31:23.347 "0aab8b91-a7bf-40ca-aec3-752ca251734f" 00:31:23.347 ], 00:31:23.347 "product_name": "NVMe disk", 00:31:23.347 "block_size": 512, 00:31:23.347 "num_blocks": 2097152, 00:31:23.347 "uuid": "0aab8b91-a7bf-40ca-aec3-752ca251734f", 00:31:23.347 "assigned_rate_limits": { 00:31:23.347 "rw_ios_per_sec": 0, 00:31:23.347 "rw_mbytes_per_sec": 0, 00:31:23.347 "r_mbytes_per_sec": 0, 00:31:23.347 "w_mbytes_per_sec": 0 00:31:23.347 }, 00:31:23.347 "claimed": false, 00:31:23.347 "zoned": false, 00:31:23.347 "supported_io_types": { 00:31:23.347 "read": true, 00:31:23.347 "write": true, 00:31:23.347 "unmap": false, 00:31:23.347 "flush": true, 00:31:23.347 "reset": true, 00:31:23.347 "nvme_admin": true, 00:31:23.347 "nvme_io": true, 00:31:23.347 "nvme_io_md": false, 00:31:23.347 "write_zeroes": true, 00:31:23.347 "zcopy": false, 00:31:23.347 "get_zone_info": false, 00:31:23.347 "zone_management": false, 00:31:23.347 "zone_append": false, 00:31:23.347 "compare": true, 00:31:23.347 "compare_and_write": true, 00:31:23.347 "abort": true, 00:31:23.347 "seek_hole": false, 00:31:23.347 "seek_data": false, 00:31:23.347 "copy": true, 00:31:23.347 "nvme_iov_md": false 00:31:23.347 }, 00:31:23.347 "memory_domains": [ 00:31:23.347 { 00:31:23.347 "dma_device_id": "system", 00:31:23.347 "dma_device_type": 1 00:31:23.347 } 00:31:23.347 ], 00:31:23.347 "driver_specific": { 00:31:23.347 "nvme": [ 00:31:23.347 { 00:31:23.347 "trid": { 00:31:23.347 "trtype": "TCP", 00:31:23.347 "adrfam": "IPv4", 00:31:23.347 "traddr": "10.0.0.2", 
00:31:23.347 "trsvcid": "4420", 00:31:23.347 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:23.347 }, 00:31:23.347 "ctrlr_data": { 00:31:23.347 "cntlid": 1, 00:31:23.347 "vendor_id": "0x8086", 00:31:23.347 "model_number": "SPDK bdev Controller", 00:31:23.347 "serial_number": "00000000000000000000", 00:31:23.347 "firmware_revision": "24.09", 00:31:23.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.347 "oacs": { 00:31:23.347 "security": 0, 00:31:23.347 "format": 0, 00:31:23.347 "firmware": 0, 00:31:23.347 "ns_manage": 0 00:31:23.347 }, 00:31:23.347 "multi_ctrlr": true, 00:31:23.347 "ana_reporting": false 00:31:23.347 }, 00:31:23.347 "vs": { 00:31:23.347 "nvme_version": "1.3" 00:31:23.347 }, 00:31:23.347 "ns_data": { 00:31:23.347 "id": 1, 00:31:23.347 "can_share": true 00:31:23.347 } 00:31:23.347 } 00:31:23.347 ], 00:31:23.347 "mp_policy": "active_passive" 00:31:23.347 } 00:31:23.347 } 00:31:23.347 ] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 [2024-07-10 23:35:32.223585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:23.347 [2024-07-10 23:35:32.223683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d500 (9): Bad file descriptor 00:31:23.347 [2024-07-10 23:35:32.355284] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 [ 00:31:23.347 { 00:31:23.347 "name": "nvme0n1", 00:31:23.347 "aliases": [ 00:31:23.347 "0aab8b91-a7bf-40ca-aec3-752ca251734f" 00:31:23.347 ], 00:31:23.347 "product_name": "NVMe disk", 00:31:23.347 "block_size": 512, 00:31:23.347 "num_blocks": 2097152, 00:31:23.347 "uuid": "0aab8b91-a7bf-40ca-aec3-752ca251734f", 00:31:23.347 "assigned_rate_limits": { 00:31:23.347 "rw_ios_per_sec": 0, 00:31:23.347 "rw_mbytes_per_sec": 0, 00:31:23.347 "r_mbytes_per_sec": 0, 00:31:23.347 "w_mbytes_per_sec": 0 00:31:23.347 }, 00:31:23.347 "claimed": false, 00:31:23.347 "zoned": false, 00:31:23.347 "supported_io_types": { 00:31:23.347 "read": true, 00:31:23.347 "write": true, 00:31:23.347 "unmap": false, 00:31:23.347 "flush": true, 00:31:23.347 "reset": true, 00:31:23.347 "nvme_admin": true, 00:31:23.347 "nvme_io": true, 00:31:23.347 "nvme_io_md": false, 00:31:23.347 "write_zeroes": true, 00:31:23.347 "zcopy": false, 00:31:23.347 "get_zone_info": false, 00:31:23.347 "zone_management": false, 00:31:23.347 "zone_append": false, 00:31:23.347 "compare": true, 00:31:23.347 "compare_and_write": true, 00:31:23.347 "abort": true, 00:31:23.347 "seek_hole": false, 00:31:23.347 "seek_data": false, 00:31:23.347 "copy": true, 00:31:23.347 "nvme_iov_md": false 00:31:23.347 }, 00:31:23.347 "memory_domains": [ 00:31:23.347 { 00:31:23.347 "dma_device_id": "system", 00:31:23.347 
"dma_device_type": 1 00:31:23.347 } 00:31:23.347 ], 00:31:23.347 "driver_specific": { 00:31:23.347 "nvme": [ 00:31:23.347 { 00:31:23.347 "trid": { 00:31:23.347 "trtype": "TCP", 00:31:23.347 "adrfam": "IPv4", 00:31:23.347 "traddr": "10.0.0.2", 00:31:23.347 "trsvcid": "4420", 00:31:23.347 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:23.347 }, 00:31:23.347 "ctrlr_data": { 00:31:23.347 "cntlid": 2, 00:31:23.347 "vendor_id": "0x8086", 00:31:23.347 "model_number": "SPDK bdev Controller", 00:31:23.347 "serial_number": "00000000000000000000", 00:31:23.347 "firmware_revision": "24.09", 00:31:23.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.347 "oacs": { 00:31:23.347 "security": 0, 00:31:23.347 "format": 0, 00:31:23.347 "firmware": 0, 00:31:23.347 "ns_manage": 0 00:31:23.347 }, 00:31:23.347 "multi_ctrlr": true, 00:31:23.347 "ana_reporting": false 00:31:23.347 }, 00:31:23.347 "vs": { 00:31:23.347 "nvme_version": "1.3" 00:31:23.347 }, 00:31:23.347 "ns_data": { 00:31:23.347 "id": 1, 00:31:23.347 "can_share": true 00:31:23.347 } 00:31:23.347 } 00:31:23.347 ], 00:31:23.347 "mp_policy": "active_passive" 00:31:23.347 } 00:31:23.347 } 00:31:23.347 ] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pVh92aFkBj 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pVh92aFkBj 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 [2024-07-10 23:35:32.404168] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:23.347 [2024-07-10 23:35:32.404303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pVh92aFkBj 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:23.347 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.347 [2024-07-10 23:35:32.412188] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pVh92aFkBj 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.607 [2024-07-10 23:35:32.420226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:23.607 [2024-07-10 23:35:32.420310] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:23.607 nvme0n1 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.607 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.607 [ 00:31:23.607 { 00:31:23.607 "name": "nvme0n1", 00:31:23.607 "aliases": [ 00:31:23.607 "0aab8b91-a7bf-40ca-aec3-752ca251734f" 00:31:23.607 ], 00:31:23.607 "product_name": "NVMe disk", 00:31:23.607 "block_size": 512, 00:31:23.607 "num_blocks": 2097152, 00:31:23.607 "uuid": "0aab8b91-a7bf-40ca-aec3-752ca251734f", 00:31:23.607 "assigned_rate_limits": { 00:31:23.607 "rw_ios_per_sec": 0, 00:31:23.607 "rw_mbytes_per_sec": 0, 00:31:23.607 "r_mbytes_per_sec": 0, 00:31:23.607 "w_mbytes_per_sec": 0 00:31:23.607 }, 00:31:23.607 "claimed": false, 00:31:23.607 "zoned": false, 00:31:23.607 "supported_io_types": { 00:31:23.607 "read": true, 00:31:23.607 "write": true, 00:31:23.607 "unmap": false, 00:31:23.607 "flush": true, 00:31:23.607 "reset": true, 00:31:23.607 "nvme_admin": true, 00:31:23.607 "nvme_io": true, 00:31:23.607 "nvme_io_md": false, 00:31:23.607 "write_zeroes": true, 00:31:23.607 "zcopy": false, 00:31:23.607 "get_zone_info": false, 00:31:23.607 "zone_management": false, 00:31:23.607 "zone_append": false, 00:31:23.607 "compare": true, 00:31:23.607 "compare_and_write": true, 00:31:23.607 "abort": true, 00:31:23.607 "seek_hole": false, 00:31:23.607 "seek_data": false, 00:31:23.607 "copy": true, 00:31:23.607 "nvme_iov_md": false 00:31:23.607 }, 00:31:23.607 "memory_domains": [ 00:31:23.607 { 00:31:23.607 "dma_device_id": "system", 00:31:23.607 "dma_device_type": 1 00:31:23.607 } 00:31:23.607 ], 00:31:23.607 "driver_specific": { 00:31:23.607 "nvme": [ 00:31:23.607 { 00:31:23.608 "trid": { 00:31:23.608 "trtype": "TCP", 00:31:23.608 "adrfam": "IPv4", 00:31:23.608 "traddr": "10.0.0.2", 00:31:23.608 "trsvcid": "4421", 00:31:23.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:23.608 }, 00:31:23.608 "ctrlr_data": { 00:31:23.608 "cntlid": 3, 00:31:23.608 "vendor_id": "0x8086", 00:31:23.608 "model_number": "SPDK bdev Controller", 00:31:23.608 "serial_number": "00000000000000000000", 00:31:23.608 "firmware_revision": "24.09", 00:31:23.608 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:31:23.608 "oacs": { 00:31:23.608 "security": 0, 00:31:23.608 "format": 0, 00:31:23.608 "firmware": 0, 00:31:23.608 "ns_manage": 0 00:31:23.608 }, 00:31:23.608 "multi_ctrlr": true, 00:31:23.608 "ana_reporting": false 00:31:23.608 }, 00:31:23.608 "vs": { 00:31:23.608 "nvme_version": "1.3" 00:31:23.608 }, 00:31:23.608 "ns_data": { 00:31:23.608 "id": 1, 00:31:23.608 "can_share": true 00:31:23.608 } 00:31:23.608 } 00:31:23.608 ], 00:31:23.608 "mp_policy": "active_passive" 00:31:23.608 } 00:31:23.608 } 00:31:23.608 ] 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pVh92aFkBj 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:23.608 rmmod nvme_tcp 00:31:23.608 rmmod nvme_fabrics 00:31:23.608 rmmod nvme_keyring 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2571397 ']' 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2571397 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2571397 ']' 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2571397 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2571397 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2571397' 00:31:23.608 killing process with pid 2571397 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2571397 00:31:23.608 [2024-07-10 23:35:32.605463] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:23.608 [2024-07-10 23:35:32.605495] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:23.608 23:35:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2571397 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.986 23:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.892 23:35:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:26.892 00:31:26.892 real 0m9.909s 00:31:26.892 user 0m4.177s 00:31:26.892 sys 0m4.161s 00:31:26.892 23:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:26.892 23:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.892 ************************************ 00:31:26.892 END TEST nvmf_async_init 00:31:26.892 ************************************ 00:31:26.892 23:35:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:26.892 23:35:35 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:26.892 23:35:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:26.892 23:35:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.892 23:35:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.151 ************************************ 00:31:27.151 START TEST dma 00:31:27.151 ************************************ 00:31:27.151 23:35:35 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:27.151 * Looking for test storage... 
00:31:27.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.151 23:35:36 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.151 23:35:36 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.151 23:35:36 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.151 23:35:36 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.151 23:35:36 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.151 23:35:36 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.151 23:35:36 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.151 23:35:36 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:31:27.151 23:35:36 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.151 23:35:36 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.151 23:35:36 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:27.151 23:35:36 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:31:27.151 00:31:27.151 real 0m0.121s 00:31:27.151 user 0m0.060s 00:31:27.151 sys 0m0.069s 00:31:27.151 23:35:36 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:27.151 23:35:36 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:31:27.151 ************************************ 00:31:27.151 END TEST dma 00:31:27.151 ************************************ 00:31:27.151 23:35:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:27.151 23:35:36 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:27.151 23:35:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:27.151 23:35:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.151 23:35:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.151 ************************************ 00:31:27.151 START TEST nvmf_identify 00:31:27.151 ************************************ 00:31:27.151 23:35:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:27.412 * Looking for test storage... 
00:31:27.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:31:27.412 23:35:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.686 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:32.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:32.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:32.687 Found net devices under 0000:86:00.0: cvl_0_0 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:32.687 Found net devices under 0000:86:00.1: cvl_0_1 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:31:32.687 00:31:32.687 --- 10.0.0.2 ping statistics --- 00:31:32.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.687 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:31:32.687 00:31:32.687 --- 10.0.0.1 ping statistics --- 00:31:32.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.687 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2575215 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2575215 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2575215 ']' 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:32.687 23:35:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:32.687 [2024-07-10 23:35:41.476405] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:31:32.687 [2024-07-10 23:35:41.476532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.687 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.687 [2024-07-10 23:35:41.585554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.947 [2024-07-10 23:35:41.815596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
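The nvmftestinit sequence above found the two E810 ports (0000:86:00.0/.1, bound to ice), moved one of their netdevs into a private network namespace so that target and initiator traffic has to traverse the physical link, opened TCP port 4420 through the firewall, verified reachability in both directions, and loaded the kernel NVMe/TCP module before host/identify.sh launched the target inside that namespace. A condensed sketch of the equivalent manual steps, using the interface names and flags from this run (the cvl_0_* names and the repo-relative binary path are specific to this box):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP from the initiator side
ping -c 1 10.0.0.2                                            # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> host
modprobe nvme-tcp                                             # kernel initiator transport
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &  # -m 0xF: cores 0-3, -e 0xFFFF: tracepoint group mask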
00:31:32.947 [2024-07-10 23:35:41.815636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.947 [2024-07-10 23:35:41.815648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.947 [2024-07-10 23:35:41.815657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.947 [2024-07-10 23:35:41.815666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.947 [2024-07-10 23:35:41.815739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.947 [2024-07-10 23:35:41.815756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.947 [2024-07-10 23:35:41.815781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.947 [2024-07-10 23:35:41.815786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.209 [2024-07-10 23:35:42.264283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:33.209 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 Malloc0 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 [2024-07-10 23:35:42.413541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:33.469 [ 00:31:33.469 { 00:31:33.469 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:33.469 "subtype": "Discovery", 00:31:33.469 "listen_addresses": [ 00:31:33.469 { 00:31:33.469 "trtype": "TCP", 00:31:33.469 "adrfam": "IPv4", 00:31:33.469 "traddr": "10.0.0.2", 00:31:33.469 "trsvcid": "4420" 00:31:33.469 } 00:31:33.469 ], 00:31:33.469 "allow_any_host": true, 00:31:33.469 "hosts": [] 00:31:33.469 }, 00:31:33.469 { 00:31:33.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.469 "subtype": "NVMe", 00:31:33.469 "listen_addresses": [ 00:31:33.469 { 00:31:33.469 "trtype": "TCP", 00:31:33.469 "adrfam": "IPv4", 00:31:33.469 "traddr": "10.0.0.2", 00:31:33.469 "trsvcid": "4420" 00:31:33.469 } 00:31:33.469 ], 00:31:33.469 "allow_any_host": true, 00:31:33.469 "hosts": [], 00:31:33.469 "serial_number": "SPDK00000000000001", 00:31:33.469 "model_number": "SPDK bdev Controller", 00:31:33.469 "max_namespaces": 32, 00:31:33.469 "min_cntlid": 1, 00:31:33.469 "max_cntlid": 65519, 00:31:33.469 "namespaces": [ 00:31:33.469 { 00:31:33.469 "nsid": 1, 00:31:33.469 "bdev_name": "Malloc0", 00:31:33.469 "name": "Malloc0", 00:31:33.469 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:33.469 "eui64": "ABCDEF0123456789", 00:31:33.469 "uuid": "b9396b93-4857-41aa-ae58-1a98377ab5c5" 00:31:33.469 } 00:31:33.469 ] 00:31:33.469 } 00:31:33.469 ] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.469 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:33.469 [2024-07-10 23:35:42.483800] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
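The nvmf_get_subsystems JSON above is the target's view after host/identify.sh configured it over RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock Unix socket, which the network namespace does not affect. A sketch of the same sequence replayed by hand (repo-relative rpc.py path assumed):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # '-t tcp -o' from NVMF_TRANSPORT_OPTS, -u: 8 KiB I/O unit
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                       # returns the JSON printed above

The spdk_nvme_identify run starting here is a second DPDK process (hence the separate --file-prefix in the EAL parameters below); -L all turns on the nvme/nvme_tcp debug flags that produce the connect and controller bring-up trace which follows.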
00:31:33.469 [2024-07-10 23:35:42.483863] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575458 ] 00:31:33.469 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.469 [2024-07-10 23:35:42.526626] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:33.469 [2024-07-10 23:35:42.526726] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:33.469 [2024-07-10 23:35:42.526739] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:33.469 [2024-07-10 23:35:42.526759] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:33.469 [2024-07-10 23:35:42.526774] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:33.469 [2024-07-10 23:35:42.530211] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:33.469 [2024-07-10 23:35:42.530262] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:33.735 [2024-07-10 23:35:42.538181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:33.735 [2024-07-10 23:35:42.538204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:33.735 [2024-07-10 23:35:42.538212] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:33.735 [2024-07-10 23:35:42.538220] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:33.735 [2024-07-10 23:35:42.538271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.538280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.538289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.538310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:33.735 [2024-07-10 23:35:42.538332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.545177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.545199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.545205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.545232] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:33.735 [2024-07-10 23:35:42.545245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:33.735 [2024-07-10 23:35:42.545253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:33.735 [2024-07-10 23:35:42.545274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545283] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.545305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.545326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.545533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.545545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.545551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.545566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:33.735 [2024-07-10 23:35:42.545580] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:33.735 [2024-07-10 23:35:42.545591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.545616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.545635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.545721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.545730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.545735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.545747] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:33.735 [2024-07-10 23:35:42.545758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:33.735 [2024-07-10 23:35:42.545772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.545797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.545812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.545891] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.545900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.545904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.545917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:33.735 [2024-07-10 23:35:42.545929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.545941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.545951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.545966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.546048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.546057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.546062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.546074] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:33.735 [2024-07-10 23:35:42.546083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:33.735 [2024-07-10 23:35:42.546096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:33.735 [2024-07-10 23:35:42.546204] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:33.735 [2024-07-10 23:35:42.546211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:33.735 [2024-07-10 23:35:42.546223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.546247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.546265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.546379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.546387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.546392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.546404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:33.735 [2024-07-10 23:35:42.546425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.546446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.546461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.546549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.546558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.546562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.546574] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:33.735 [2024-07-10 23:35:42.546581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:33.735 [2024-07-10 23:35:42.546592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:33.735 [2024-07-10 23:35:42.546605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:33.735 [2024-07-10 23:35:42.546621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.546643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.546657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.546836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.735 [2024-07-10 23:35:42.546846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.735 [2024-07-10 23:35:42.546852] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546858] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:33.735 [2024-07-10 23:35:42.546865] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.735 [2024-07-10 23:35:42.546872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546884] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546891] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.546914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.546919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.546924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.546943] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:33.735 [2024-07-10 23:35:42.546951] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:33.735 [2024-07-10 23:35:42.546959] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:33.735 [2024-07-10 23:35:42.546967] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:33.735 [2024-07-10 23:35:42.546975] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:33.735 [2024-07-10 23:35:42.546982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:33.735 [2024-07-10 23:35:42.546994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:33.735 [2024-07-10 23:35:42.547006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:33.735 [2024-07-10 23:35:42.547046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.547131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.547140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.547144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.547166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 
00:31:33.735 [2024-07-10 23:35:42.547192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.735 [2024-07-10 23:35:42.547201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.735 [2024-07-10 23:35:42.547227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.735 [2024-07-10 23:35:42.547255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.735 [2024-07-10 23:35:42.547281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:33.735 [2024-07-10 23:35:42.547295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:33.735 [2024-07-10 23:35:42.547308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.547343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.735 [2024-07-10 23:35:42.547351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:33.735 [2024-07-10 23:35:42.547357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:33.735 [2024-07-10 23:35:42.547363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:33.735 [2024-07-10 23:35:42.547369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.735 [2024-07-10 23:35:42.547494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.547506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.547510] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.547523] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:33.735 [2024-07-10 23:35:42.547531] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:33.735 [2024-07-10 23:35:42.547549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.547580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.735 [2024-07-10 23:35:42.547694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.735 [2024-07-10 23:35:42.547704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.735 [2024-07-10 23:35:42.547709] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:33.735 [2024-07-10 23:35:42.547726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.735 [2024-07-10 23:35:42.547733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547742] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547748] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.735 [2024-07-10 23:35:42.547777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.735 [2024-07-10 23:35:42.547782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.735 [2024-07-10 23:35:42.547810] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:33.735 [2024-07-10 23:35:42.547851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.735 [2024-07-10 23:35:42.547880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.547892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x61500001db80) 00:31:33.735 [2024-07-10 23:35:42.547903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.735 [2024-07-10 23:35:42.547919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.735 [2024-07-10 23:35:42.547926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:33.735 [2024-07-10 23:35:42.548117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.735 [2024-07-10 23:35:42.548127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.735 [2024-07-10 23:35:42.548132] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.548138] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:31:33.735 [2024-07-10 23:35:42.548144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:31:33.735 [2024-07-10 23:35:42.548153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.548170] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.548176] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.735 [2024-07-10 23:35:42.548185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.736 [2024-07-10 23:35:42.548193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.736 [2024-07-10 23:35:42.548198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.548203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:33.736 [2024-07-10 23:35:42.593173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.736 [2024-07-10 23:35:42.593194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.736 [2024-07-10 23:35:42.593199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.736 [2024-07-10 23:35:42.593231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.736 [2024-07-10 23:35:42.593251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.736 [2024-07-10 23:35:42.593275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.736 [2024-07-10 23:35:42.593483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.736 [2024-07-10 23:35:42.593496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.736 [2024-07-10 23:35:42.593501] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593506] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:31:33.736 [2024-07-10 23:35:42.593512] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:31:33.736 [2024-07-10 23:35:42.593518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593527] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593532] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.736 [2024-07-10 23:35:42.593563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.736 [2024-07-10 23:35:42.593568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.736 [2024-07-10 23:35:42.593588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.736 [2024-07-10 23:35:42.593610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.736 [2024-07-10 23:35:42.593631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.736 [2024-07-10 23:35:42.593773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.736 [2024-07-10 23:35:42.593780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.736 [2024-07-10 23:35:42.593785] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593790] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:31:33.736 [2024-07-10 23:35:42.593796] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:31:33.736 [2024-07-10 23:35:42.593802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593810] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.593815] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.634329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.736 [2024-07-10 23:35:42.634348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.736 [2024-07-10 23:35:42.634353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.736 [2024-07-10 23:35:42.634359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.736 ===================================================== 00:31:33.736 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:33.736 ===================================================== 00:31:33.736 Controller Capabilities/Features 00:31:33.736 ================================ 00:31:33.736 Vendor ID: 0000 00:31:33.736 Subsystem Vendor ID: 0000 00:31:33.736 Serial Number: .................... 00:31:33.736 Model Number: ........................................ 
00:31:33.736 Firmware Version: 24.09 00:31:33.736 Recommended Arb Burst: 0 00:31:33.736 IEEE OUI Identifier: 00 00 00 00:31:33.736 Multi-path I/O 00:31:33.736 May have multiple subsystem ports: No 00:31:33.736 May have multiple controllers: No 00:31:33.736 Associated with SR-IOV VF: No 00:31:33.736 Max Data Transfer Size: 131072 00:31:33.736 Max Number of Namespaces: 0 00:31:33.736 Max Number of I/O Queues: 1024 00:31:33.736 NVMe Specification Version (VS): 1.3 00:31:33.736 NVMe Specification Version (Identify): 1.3 00:31:33.736 Maximum Queue Entries: 128 00:31:33.736 Contiguous Queues Required: Yes 00:31:33.736 Arbitration Mechanisms Supported 00:31:33.736 Weighted Round Robin: Not Supported 00:31:33.736 Vendor Specific: Not Supported 00:31:33.736 Reset Timeout: 15000 ms 00:31:33.736 Doorbell Stride: 4 bytes 00:31:33.736 NVM Subsystem Reset: Not Supported 00:31:33.736 Command Sets Supported 00:31:33.736 NVM Command Set: Supported 00:31:33.736 Boot Partition: Not Supported 00:31:33.736 Memory Page Size Minimum: 4096 bytes 00:31:33.736 Memory Page Size Maximum: 4096 bytes 00:31:33.736 Persistent Memory Region: Not Supported 00:31:33.736 Optional Asynchronous Events Supported 00:31:33.736 Namespace Attribute Notices: Not Supported 00:31:33.736 Firmware Activation Notices: Not Supported 00:31:33.736 ANA Change Notices: Not Supported 00:31:33.736 PLE Aggregate Log Change Notices: Not Supported 00:31:33.736 LBA Status Info Alert Notices: Not Supported 00:31:33.736 EGE Aggregate Log Change Notices: Not Supported 00:31:33.736 Normal NVM Subsystem Shutdown event: Not Supported 00:31:33.736 Zone Descriptor Change Notices: Not Supported 00:31:33.736 Discovery Log Change Notices: Supported 00:31:33.736 Controller Attributes 00:31:33.736 128-bit Host Identifier: Not Supported 00:31:33.736 Non-Operational Permissive Mode: Not Supported 00:31:33.736 NVM Sets: Not Supported 00:31:33.736 Read Recovery Levels: Not Supported 00:31:33.736 Endurance Groups: Not Supported 00:31:33.736 Predictable Latency Mode: Not Supported 00:31:33.736 Traffic Based Keep ALive: Not Supported 00:31:33.736 Namespace Granularity: Not Supported 00:31:33.736 SQ Associations: Not Supported 00:31:33.736 UUID List: Not Supported 00:31:33.736 Multi-Domain Subsystem: Not Supported 00:31:33.736 Fixed Capacity Management: Not Supported 00:31:33.736 Variable Capacity Management: Not Supported 00:31:33.736 Delete Endurance Group: Not Supported 00:31:33.736 Delete NVM Set: Not Supported 00:31:33.736 Extended LBA Formats Supported: Not Supported 00:31:33.736 Flexible Data Placement Supported: Not Supported 00:31:33.736 00:31:33.736 Controller Memory Buffer Support 00:31:33.736 ================================ 00:31:33.736 Supported: No 00:31:33.736 00:31:33.736 Persistent Memory Region Support 00:31:33.736 ================================ 00:31:33.736 Supported: No 00:31:33.736 00:31:33.736 Admin Command Set Attributes 00:31:33.736 ============================ 00:31:33.736 Security Send/Receive: Not Supported 00:31:33.736 Format NVM: Not Supported 00:31:33.736 Firmware Activate/Download: Not Supported 00:31:33.736 Namespace Management: Not Supported 00:31:33.736 Device Self-Test: Not Supported 00:31:33.736 Directives: Not Supported 00:31:33.736 NVMe-MI: Not Supported 00:31:33.736 Virtualization Management: Not Supported 00:31:33.736 Doorbell Buffer Config: Not Supported 00:31:33.736 Get LBA Status Capability: Not Supported 00:31:33.736 Command & Feature Lockdown Capability: Not Supported 00:31:33.736 Abort Command Limit: 1 00:31:33.736 Async 
Event Request Limit: 4 00:31:33.736 Number of Firmware Slots: N/A 00:31:33.736 Firmware Slot 1 Read-Only: N/A 00:31:33.736 Firmware Activation Without Reset: N/A 00:31:33.736 Multiple Update Detection Support: N/A 00:31:33.736 Firmware Update Granularity: No Information Provided 00:31:33.736 Per-Namespace SMART Log: No 00:31:33.736 Asymmetric Namespace Access Log Page: Not Supported 00:31:33.736 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:33.736 Command Effects Log Page: Not Supported 00:31:33.736 Get Log Page Extended Data: Supported 00:31:33.736 Telemetry Log Pages: Not Supported 00:31:33.736 Persistent Event Log Pages: Not Supported 00:31:33.736 Supported Log Pages Log Page: May Support 00:31:33.736 Commands Supported & Effects Log Page: Not Supported 00:31:33.736 Feature Identifiers & Effects Log Page:May Support 00:31:33.736 NVMe-MI Commands & Effects Log Page: May Support 00:31:33.736 Data Area 4 for Telemetry Log: Not Supported 00:31:33.736 Error Log Page Entries Supported: 128 00:31:33.736 Keep Alive: Not Supported 00:31:33.736 00:31:33.736 NVM Command Set Attributes 00:31:33.736 ========================== 00:31:33.736 Submission Queue Entry Size 00:31:33.736 Max: 1 00:31:33.736 Min: 1 00:31:33.736 Completion Queue Entry Size 00:31:33.736 Max: 1 00:31:33.736 Min: 1 00:31:33.736 Number of Namespaces: 0 00:31:33.736 Compare Command: Not Supported 00:31:33.736 Write Uncorrectable Command: Not Supported 00:31:33.736 Dataset Management Command: Not Supported 00:31:33.736 Write Zeroes Command: Not Supported 00:31:33.736 Set Features Save Field: Not Supported 00:31:33.736 Reservations: Not Supported 00:31:33.736 Timestamp: Not Supported 00:31:33.736 Copy: Not Supported 00:31:33.736 Volatile Write Cache: Not Present 00:31:33.736 Atomic Write Unit (Normal): 1 00:31:33.736 Atomic Write Unit (PFail): 1 00:31:33.736 Atomic Compare & Write Unit: 1 00:31:33.736 Fused Compare & Write: Supported 00:31:33.736 Scatter-Gather List 00:31:33.736 SGL Command Set: Supported 00:31:33.736 SGL Keyed: Supported 00:31:33.736 SGL Bit Bucket Descriptor: Not Supported 00:31:33.736 SGL Metadata Pointer: Not Supported 00:31:33.736 Oversized SGL: Not Supported 00:31:33.736 SGL Metadata Address: Not Supported 00:31:33.736 SGL Offset: Supported 00:31:33.736 Transport SGL Data Block: Not Supported 00:31:33.736 Replay Protected Memory Block: Not Supported 00:31:33.736 00:31:33.736 Firmware Slot Information 00:31:33.736 ========================= 00:31:33.736 Active slot: 0 00:31:33.736 00:31:33.736 00:31:33.736 Error Log 00:31:33.736 ========= 00:31:33.736 00:31:33.736 Active Namespaces 00:31:33.736 ================= 00:31:33.736 Discovery Log Page 00:31:33.736 ================== 00:31:33.736 Generation Counter: 2 00:31:33.736 Number of Records: 2 00:31:33.736 Record Format: 0 00:31:33.736 00:31:33.736 Discovery Log Entry 0 00:31:33.736 ---------------------- 00:31:33.736 Transport Type: 3 (TCP) 00:31:33.736 Address Family: 1 (IPv4) 00:31:33.736 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:33.736 Entry Flags: 00:31:33.736 Duplicate Returned Information: 1 00:31:33.736 Explicit Persistent Connection Support for Discovery: 1 00:31:33.736 Transport Requirements: 00:31:33.736 Secure Channel: Not Required 00:31:33.736 Port ID: 0 (0x0000) 00:31:33.736 Controller ID: 65535 (0xffff) 00:31:33.736 Admin Max SQ Size: 128 00:31:33.736 Transport Service Identifier: 4420 00:31:33.736 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:33.736 Transport Address: 10.0.0.2 00:31:33.736 
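Entry 0 above is the discovery subsystem itself; entry 1, which follows, is the NVM subsystem nqn.2016-06.io.spdk:cnode1 configured earlier, both served on 10.0.0.2:4420. The same log page can be read from the initiator with kernel nvme-cli; a sketch, reusing the host identity common.sh generated above (NVME_HOSTNQN/NVME_HOSTID):

nvme discover -t tcp -a 10.0.0.2 -s 4420                   # should list the two entries shown here
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"
nvme list                                                  # the Malloc0 namespace appears as a block device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1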
00:31:33.736 Discovery Log Entry 1
00:31:33.736 ----------------------
00:31:33.736 Transport Type: 3 (TCP)
00:31:33.736 Address Family: 1 (IPv4)
00:31:33.736 Subsystem Type: 2 (NVM Subsystem)
00:31:33.736 Entry Flags:
00:31:33.736   Duplicate Returned Information: 0
00:31:33.736   Explicit Persistent Connection Support for Discovery: 0
00:31:33.736 Transport Requirements:
00:31:33.736   Secure Channel: Not Required
00:31:33.736 Port ID: 0 (0x0000)
00:31:33.736 Controller ID: 65535 (0xffff)
00:31:33.736 Admin Max SQ Size: 128
00:31:33.736 Transport Service Identifier: 4420
00:31:33.736 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:31:33.736 Transport Address: 10.0.0.2
00:31:33.736 [2024-07-10 23:35:42.634490] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:31:33.736 [2024-07-10 23:35:42.634506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80
00:31:33.736 [2024-07-10 23:35:42.634517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.736 [2024-07-10 23:35:42.634525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80
00:31:33.736 [2024-07-10 23:35:42.634532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.736 [2024-07-10 23:35:42.634539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80
00:31:33.736 [2024-07-10 23:35:42.634545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.736 [2024-07-10 23:35:42.634551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:33.736 [2024-07-10 23:35:42.634562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
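The four ABORTED - SQ DELETION notices are outstanding admin requests being failed back while the discovery controller is destructed; (00/08) is status code type 0h, status code 8h. A sketch of how a completion callback can classify such statuses; spdk_nvme_cpl_is_error() and spdk_nvme_cpl_get_status_string() are public SPDK helpers, but verify they exist in your tree:

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: completion callback that reports failures much like nvme_qpair.c does. */
static void admin_cmd_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the aborts above this prints "ABORTED - SQ DELETION". */
		fprintf(stderr, "command failed: %s (sct %u, sc 0x%02x)\n",
			spdk_nvme_cpl_get_status_string(&cpl->status),
			(unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
	}
}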
00:31:33.736 [2024-07-10 23:35:42.634576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:33.736 [2024-07-10 23:35:42.634583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:33.736 [2024-07-10 23:35:42.634589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80)
00:31:33.736 [2024-07-10 23:35:42.634600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.736 [2024-07-10 23:35:42.634619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:31:33.736 [2024-07-10 23:35:42.634719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:33.736 [2024-07-10 23:35:42.634729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:33.736 [2024-07-10 23:35:42.634739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:33.736 [2024-07-10 23:35:42.634744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:33.737 [2024-07-10 23:35:42.634755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:33.737 [2024-07-10 23:35:42.634761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:33.737 [2024-07-10 23:35:42.634767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80)
00:31:33.737 [2024-07-10 23:35:42.634777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.737 [2024-07-10 23:35:42.634800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:31:33.737 [2024-07-10 23:35:42.634933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:33.737 [2024-07-10 23:35:42.634941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:33.737 [2024-07-10 23:35:42.634946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:33.737 [2024-07-10 23:35:42.634951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:33.737 [2024-07-10 23:35:42.634958] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:31:33.737 [2024-07-10 23:35:42.634965] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:31:33.737 [2024-07-10 23:35:42.634979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:33.737 [2024-07-10 23:35:42.634985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:33.737 [2024-07-10 23:35:42.634991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80)
00:31:33.737 [2024-07-10 23:35:42.635003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.737 [2024-07-10 23:35:42.635017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
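Every "pdu type = 5" in this stretch is a CapsuleResp PDU carrying the completion of one Fabrics Property Get poll of CSTS while the controller shuts down. The type codes come from the NVMe/TCP transport binding rather than from SPDK; a reference sketch of the values:

/* NVMe/TCP PDU types per the NVMe over Fabrics TCP transport binding. */
enum nvme_tcp_pdu_type {
	NVME_TCP_PDU_ICREQ        = 0x00, /* host -> controller connection init */
	NVME_TCP_PDU_ICRESP       = 0x01, /* "pdu type = 1" during connect      */
	NVME_TCP_PDU_H2C_TERM_REQ = 0x02,
	NVME_TCP_PDU_C2H_TERM_REQ = 0x03,
	NVME_TCP_PDU_CAPSULE_CMD  = 0x04, /* command capsule (SQE, maybe data)  */
	NVME_TCP_PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5" seen above          */
	NVME_TCP_PDU_H2C_DATA     = 0x06,
	NVME_TCP_PDU_C2H_DATA     = 0x07, /* "pdu type = 7" identify payloads   */
	NVME_TCP_PDU_R2T          = 0x09, /* 0x08 is reserved                   */
};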
00:31:33.738 [2024-07-10 23:35:42.637149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:33.738 [2024-07-10 23:35:42.637158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:33.738 [2024-07-10 23:35:42.641179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:33.738 [2024-07-10 23:35:42.641185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:33.738 [2024-07-10 23:35:42.641202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:33.738 [2024-07-10 23:35:42.641208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:33.738 [2024-07-10 23:35:42.641213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80)
00:31:33.738 [2024-07-10 23:35:42.641224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.738 [2024-07-10 23:35:42.641242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:31:33.738 [2024-07-10 23:35:42.641439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:33.738 [2024-07-10 23:35:42.641448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:33.738 [2024-07-10 23:35:42.641452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:33.738 [2024-07-10 23:35:42.641458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:33.738 [2024-07-10 23:35:42.641468] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:31:33.738 
00:31:33.738 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:31:33.738 [2024-07-10 23:35:42.728591] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
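The spdk_nvme_identify run above reconnects, this time to nqn.2016-06.io.spdk:cnode1, and the debug trace that follows is the controller-initialization state machine it drives. A condensed sketch of the same flow against SPDK's public API (error handling trimmed; the transport string is copied from the command line above):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	/* Runs the init sequence traced below: connect adminq, icreq/icresp,
	 * FABRIC CONNECT, read vs/cap, enable, identify, AER, keep alive. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("CNTLID: 0x%04x\n", spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);
	spdk_nvme_detach(ctrlr);
	return 0;
}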
00:31:33.738 [2024-07-10 23:35:42.728653] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575461 ] 00:31:33.738 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.738 [2024-07-10 23:35:42.773496] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:33.738 [2024-07-10 23:35:42.773612] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:33.738 [2024-07-10 23:35:42.773629] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:33.738 [2024-07-10 23:35:42.773647] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:33.738 [2024-07-10 23:35:42.773662] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:33.738 [2024-07-10 23:35:42.773959] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:33.738 [2024-07-10 23:35:42.773996] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:33.738 [2024-07-10 23:35:42.780174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:33.738 [2024-07-10 23:35:42.780196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:33.738 [2024-07-10 23:35:42.780204] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:33.738 [2024-07-10 23:35:42.780212] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:33.738 [2024-07-10 23:35:42.780254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.738 [2024-07-10 23:35:42.780264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.738 [2024-07-10 23:35:42.780272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.738 [2024-07-10 23:35:42.780290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:33.738 [2024-07-10 23:35:42.780314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.738 [2024-07-10 23:35:42.787174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.738 [2024-07-10 23:35:42.787199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.738 [2024-07-10 23:35:42.787205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.738 [2024-07-10 23:35:42.787213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.738 [2024-07-10 23:35:42.787233] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:33.738 [2024-07-10 23:35:42.787246] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:33.738 [2024-07-10 23:35:42.787255] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:33.738 [2024-07-10 23:35:42.787271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.738 [2024-07-10 23:35:42.787279] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.738 [2024-07-10 23:35:42.787288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.738 [2024-07-10 23:35:42.787301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.738 [2024-07-10 23:35:42.787322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.738 [2024-07-10 23:35:42.787456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.738 [2024-07-10 23:35:42.787466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.738 [2024-07-10 23:35:42.787472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.738 [2024-07-10 23:35:42.787479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.738 [2024-07-10 23:35:42.787487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:33.739 [2024-07-10 23:35:42.787498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:33.739 [2024-07-10 23:35:42.787510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.787537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.739 [2024-07-10 23:35:42.787553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.787668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.739 [2024-07-10 23:35:42.787678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.787683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.787698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:33.739 [2024-07-10 23:35:42.787711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:33.739 [2024-07-10 23:35:42.787721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.787746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.739 [2024-07-10 23:35:42.787761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.787842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
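On a fabrics transport, register reads travel as the FABRIC PROPERTY GET capsules above: "read vs" fetches the Version property and "read cap" the Capabilities property. After connect, SPDK caches them and exposes accessors; a sketch that decodes the values this controller reported (accessor names from spdk/nvme.h; confirm for your SPDK version):

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: reproduce a few identify-output lines from the cached registers. */
static void print_vs_and_cap(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	/* "NVMe Specification Version (VS): 1.3" */
	printf("VS: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
	/* CAP.MQES is zero-based: 127 -> "Maximum Queue Entries: 128". */
	printf("Maximum Queue Entries: %u\n", (unsigned)cap.bits.mqes + 1);
	/* CAP.TO counts 500 ms units: 30 -> "Reset Timeout: 15000 ms". */
	printf("Reset Timeout: %u ms\n", (unsigned)cap.bits.to * 500);
}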
00:31:33.739 [2024-07-10 23:35:42.787853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.787858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.787871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:33.739 [2024-07-10 23:35:42.787885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.787898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.787910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.739 [2024-07-10 23:35:42.787925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.788009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.739 [2024-07-10 23:35:42.788018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.788024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.788037] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:33.739 [2024-07-10 23:35:42.788045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:33.739 [2024-07-10 23:35:42.788058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:33.739 [2024-07-10 23:35:42.788167] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:33.739 [2024-07-10 23:35:42.788174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:33.739 [2024-07-10 23:35:42.788185] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.788211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.739 [2024-07-10 23:35:42.788228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.788345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.739 [2024-07-10 23:35:42.788355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.788360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.788374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:33.739 [2024-07-10 23:35:42.788387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.788411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.739 [2024-07-10 23:35:42.788426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.788535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.739 [2024-07-10 23:35:42.788544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.788550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.788564] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:33.739 [2024-07-10 23:35:42.788577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:33.739 [2024-07-10 23:35:42.788588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:33.739 [2024-07-10 23:35:42.788601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:33.739 [2024-07-10 23:35:42.788616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.788634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.739 [2024-07-10 23:35:42.788651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.788785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.739 [2024-07-10 23:35:42.788797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.739 [2024-07-10 23:35:42.788802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:33.739 [2024-07-10 23:35:42.788816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.739 [2024-07-10 23:35:42.788823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788834] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788841] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.739 [2024-07-10 23:35:42.788868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.788874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.788896] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:33.739 [2024-07-10 23:35:42.788905] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:33.739 [2024-07-10 23:35:42.788912] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:33.739 [2024-07-10 23:35:42.788919] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:33.739 [2024-07-10 23:35:42.788928] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:33.739 [2024-07-10 23:35:42.788936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:33.739 [2024-07-10 23:35:42.788948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:33.739 [2024-07-10 23:35:42.788960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.788972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.788984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:33.739 [2024-07-10 23:35:42.789004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.739 [2024-07-10 23:35:42.789099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.739 [2024-07-10 23:35:42.789109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.739 [2024-07-10 23:35:42.789114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:33.739 [2024-07-10 23:35:42.789132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.789167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.739 [2024-07-10 23:35:42.789177] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.789196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.739 [2024-07-10 23:35:42.789203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.789222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.739 [2024-07-10 23:35:42.789231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.739 [2024-07-10 23:35:42.789242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:33.739 [2024-07-10 23:35:42.789253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.740 [2024-07-10 23:35:42.789260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.789302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.740 [2024-07-10 23:35:42.789321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:33.740 [2024-07-10 23:35:42.789330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:33.740 [2024-07-10 23:35:42.789337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:33.740 [2024-07-10 23:35:42.789343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:33.740 [2024-07-10 23:35:42.789349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.740 [2024-07-10 23:35:42.789470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.740 [2024-07-10 23:35:42.789480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.740 [2024-07-10 23:35:42.789485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.740 [2024-07-10 23:35:42.789498] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:33.740 [2024-07-10 23:35:42.789506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.789562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:33.740 [2024-07-10 23:35:42.789577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.740 [2024-07-10 23:35:42.789681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.740 [2024-07-10 23:35:42.789690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.740 [2024-07-10 23:35:42.789695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.740 [2024-07-10 23:35:42.789775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.789804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.789823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.740 [2024-07-10 23:35:42.789843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.740 [2024-07-10 23:35:42.789968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.740 [2024-07-10 23:35:42.789977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.740 [2024-07-10 23:35:42.789982] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.789988] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:33.740 [2024-07-10 23:35:42.789994] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.740 [2024-07-10 23:35:42.790000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790016] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.740 [2024-07-10 23:35:42.790090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.740 [2024-07-10 23:35:42.790096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.740 [2024-07-10 23:35:42.790130] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:33.740 [2024-07-10 23:35:42.790146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.790198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.740 [2024-07-10 23:35:42.790214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.740 [2024-07-10 23:35:42.790327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.740 [2024-07-10 23:35:42.790337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.740 [2024-07-10 23:35:42.790342] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790348] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:33.740 [2024-07-10 23:35:42.790354] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.740 [2024-07-10 23:35:42.790360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790390] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790397] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.740 [2024-07-10 23:35:42.790472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.740 [2024-07-10 23:35:42.790481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.740 [2024-07-10 23:35:42.790507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.790562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.740 [2024-07-10 23:35:42.790578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.740 [2024-07-10 23:35:42.790679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.740 [2024-07-10 23:35:42.790688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.740 [2024-07-10 23:35:42.790693] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790699] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:33.740 [2024-07-10 23:35:42.790705] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.740 [2024-07-10 23:35:42.790711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790719] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790725] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.740 [2024-07-10 23:35:42.790769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.740 [2024-07-10 23:35:42.790773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.740 [2024-07-10 23:35:42.790795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:33.740 [2024-07-10 23:35:42.790850] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:33.740 [2024-07-10 23:35:42.790858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
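Past "transport ready" the host's steady-state duties are the two things negotiated above: servicing async events (the SET FEATURES ASYNC EVENT CONFIGURATION command, cdw10 0x0b) and polling the admin queue so the periodic keep-alive goes out ("Sending keep alive every 5000000 us", which matches half of SPDK's default 10 s keep-alive timeout). A sketch of both using public SPDK calls:

#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* cdw0 of an AER completion encodes the event type/info per the spec. */
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

/* Sketch: steady-state admin-queue loop for a connected controller. */
static void run_admin_loop(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	for (;;) {
		/* Also drives the periodic keep-alive seen in the log. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}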
00:31:33.740 [2024-07-10 23:35:42.790866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:33.740 [2024-07-10 23:35:42.790897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.790915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.740 [2024-07-10 23:35:42.790924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.790938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:33.740 [2024-07-10 23:35:42.790948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.740 [2024-07-10 23:35:42.790966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.740 [2024-07-10 23:35:42.790974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:33.740 [2024-07-10 23:35:42.791126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.740 [2024-07-10 23:35:42.791135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.740 [2024-07-10 23:35:42.791141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.740 [2024-07-10 23:35:42.791150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.740 [2024-07-10 23:35:42.795170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.795188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.795195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.795219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:33.741 [2024-07-10 23:35:42.795395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.795405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.795410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.795427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795434] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:33.741 [2024-07-10 23:35:42.795596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.795606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.795611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.795627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:33.741 [2024-07-10 23:35:42.795738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.795747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.795756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.795783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.795876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:33.741 [2024-07-10 23:35:42.795885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:33.741 [2024-07-10 23:35:42.795902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:33.741 [2024-07-10 23:35:42.795910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:33.741 [2024-07-10 23:35:42.795917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:33.741 [2024-07-10 23:35:42.795922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:33.741 [2024-07-10 23:35:42.796097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.741 [2024-07-10 23:35:42.796108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.741 [2024-07-10 23:35:42.796113] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796119] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:31:33.741 [2024-07-10 23:35:42.796126] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:31:33.741 [2024-07-10 23:35:42.796133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796230] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796238] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.741 [2024-07-10 23:35:42.796256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.741 [2024-07-10 23:35:42.796260] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796266] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:31:33.741 [2024-07-10 23:35:42.796272] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:33.741 [2024-07-10 23:35:42.796277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796288] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796296] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.741 [2024-07-10 23:35:42.796310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.741 [2024-07-10 23:35:42.796315] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796321] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:31:33.741 [2024-07-10 23:35:42.796327] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:33.741 [2024-07-10 23:35:42.796333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:31:33.741 [2024-07-10 23:35:42.796341] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:33.741 [2024-07-10 23:35:42.796360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:33.741 [2024-07-10 23:35:42.796364] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796370] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:31:33.741 [2024-07-10 23:35:42.796375] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:33.741 [2024-07-10 23:35:42.796381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796391] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796397] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.796414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.796419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.796452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.796459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.796464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.796483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.796492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.796502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:31:33.741 [2024-07-10 23:35:42.796517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:33.741 [2024-07-10 23:35:42.796525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:33.741 [2024-07-10 23:35:42.796530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:33.741 [2024-07-10 23:35:42.796535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:33.741 ===================================================== 00:31:33.741 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.741 ===================================================== 00:31:33.741 Controller Capabilities/Features 00:31:33.741 ================================ 00:31:33.741 Vendor ID: 8086 00:31:33.741 Subsystem Vendor ID: 8086 00:31:33.741 Serial Number: SPDK00000000000001 00:31:33.741 Model Number: SPDK bdev Controller 
00:31:33.741 Firmware Version: 24.09
00:31:33.741 Recommended Arb Burst: 6
00:31:33.741 IEEE OUI Identifier: e4 d2 5c
00:31:33.741 Multi-path I/O
00:31:33.741 May have multiple subsystem ports: Yes
00:31:33.741 May have multiple controllers: Yes
00:31:33.741 Associated with SR-IOV VF: No
00:31:33.741 Max Data Transfer Size: 131072
00:31:33.741 Max Number of Namespaces: 32
00:31:33.741 Max Number of I/O Queues: 127
00:31:33.741 NVMe Specification Version (VS): 1.3
00:31:33.741 NVMe Specification Version (Identify): 1.3
00:31:33.741 Maximum Queue Entries: 128
00:31:33.741 Contiguous Queues Required: Yes
00:31:33.741 Arbitration Mechanisms Supported
00:31:33.741 Weighted Round Robin: Not Supported
00:31:33.741 Vendor Specific: Not Supported
00:31:33.741 Reset Timeout: 15000 ms
00:31:33.741 Doorbell Stride: 4 bytes
00:31:33.741 NVM Subsystem Reset: Not Supported
00:31:33.741 Command Sets Supported
00:31:33.741 NVM Command Set: Supported
00:31:33.741 Boot Partition: Not Supported
00:31:33.741 Memory Page Size Minimum: 4096 bytes
00:31:33.741 Memory Page Size Maximum: 4096 bytes
00:31:33.741 Persistent Memory Region: Not Supported
00:31:33.741 Optional Asynchronous Events Supported
00:31:33.741 Namespace Attribute Notices: Supported
00:31:33.741 Firmware Activation Notices: Not Supported
00:31:33.742 ANA Change Notices: Not Supported
00:31:33.742 PLE Aggregate Log Change Notices: Not Supported
00:31:33.742 LBA Status Info Alert Notices: Not Supported
00:31:33.742 EGE Aggregate Log Change Notices: Not Supported
00:31:33.742 Normal NVM Subsystem Shutdown event: Not Supported
00:31:33.742 Zone Descriptor Change Notices: Not Supported
00:31:33.742 Discovery Log Change Notices: Not Supported
00:31:33.742 Controller Attributes
00:31:33.742 128-bit Host Identifier: Supported
00:31:33.742 Non-Operational Permissive Mode: Not Supported
00:31:33.742 NVM Sets: Not Supported
00:31:33.742 Read Recovery Levels: Not Supported
00:31:33.742 Endurance Groups: Not Supported
00:31:33.742 Predictable Latency Mode: Not Supported
00:31:33.742 Traffic Based Keep ALive: Not Supported
00:31:33.742 Namespace Granularity: Not Supported
00:31:33.742 SQ Associations: Not Supported
00:31:33.742 UUID List: Not Supported
00:31:33.742 Multi-Domain Subsystem: Not Supported
00:31:33.742 Fixed Capacity Management: Not Supported
00:31:33.742 Variable Capacity Management: Not Supported
00:31:33.742 Delete Endurance Group: Not Supported
00:31:33.742 Delete NVM Set: Not Supported
00:31:33.742 Extended LBA Formats Supported: Not Supported
00:31:33.742 Flexible Data Placement Supported: Not Supported
00:31:33.742
00:31:33.742 Controller Memory Buffer Support
00:31:33.742 ================================
00:31:33.742 Supported: No
00:31:33.742
00:31:33.742 Persistent Memory Region Support
00:31:33.742 ================================
00:31:33.742 Supported: No
00:31:33.742
00:31:33.742 Admin Command Set Attributes
00:31:33.742 ============================
00:31:33.742 Security Send/Receive: Not Supported
00:31:33.742 Format NVM: Not Supported
00:31:33.742 Firmware Activate/Download: Not Supported
00:31:33.742 Namespace Management: Not Supported
00:31:33.742 Device Self-Test: Not Supported
00:31:33.742 Directives: Not Supported
00:31:33.742 NVMe-MI: Not Supported
00:31:33.742 Virtualization Management: Not Supported
00:31:33.742 Doorbell Buffer Config: Not Supported
00:31:33.742 Get LBA Status Capability: Not Supported
00:31:33.742 Command & Feature Lockdown Capability: Not Supported
00:31:33.742 Abort Command Limit: 4
00:31:33.742 Async Event Request Limit: 4
00:31:33.742 Number of Firmware Slots: N/A
00:31:33.742 Firmware Slot 1 Read-Only: N/A
00:31:33.742 Firmware Activation Without Reset: N/A
00:31:33.742 Multiple Update Detection Support: N/A
00:31:33.742 Firmware Update Granularity: No Information Provided
00:31:33.742 Per-Namespace SMART Log: No
00:31:33.742 Asymmetric Namespace Access Log Page: Not Supported
00:31:33.742 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:31:33.742 Command Effects Log Page: Supported
00:31:33.742 Get Log Page Extended Data: Supported
00:31:33.742 Telemetry Log Pages: Not Supported
00:31:33.742 Persistent Event Log Pages: Not Supported
00:31:33.742 Supported Log Pages Log Page: May Support
00:31:33.742 Commands Supported & Effects Log Page: Not Supported
00:31:33.742 Feature Identifiers & Effects Log Page: May Support
00:31:33.742 NVMe-MI Commands & Effects Log Page: May Support
00:31:33.742 Data Area 4 for Telemetry Log: Not Supported
00:31:33.742 Error Log Page Entries Supported: 128
00:31:33.742 Keep Alive: Supported
00:31:33.742 Keep Alive Granularity: 10000 ms
00:31:33.742
00:31:33.742 NVM Command Set Attributes
00:31:33.742 ==========================
00:31:33.742 Submission Queue Entry Size
00:31:33.742 Max: 64
00:31:33.742 Min: 64
00:31:33.742 Completion Queue Entry Size
00:31:33.742 Max: 16
00:31:33.742 Min: 16
00:31:33.742 Number of Namespaces: 32
00:31:33.742 Compare Command: Supported
00:31:33.742 Write Uncorrectable Command: Not Supported
00:31:33.742 Dataset Management Command: Supported
00:31:33.742 Write Zeroes Command: Supported
00:31:33.742 Set Features Save Field: Not Supported
00:31:33.742 Reservations: Supported
00:31:33.742 Timestamp: Not Supported
00:31:33.742 Copy: Supported
00:31:33.742 Volatile Write Cache: Present
00:31:33.742 Atomic Write Unit (Normal): 1
00:31:33.742 Atomic Write Unit (PFail): 1
00:31:33.742 Atomic Compare & Write Unit: 1
00:31:33.742 Fused Compare & Write: Supported
00:31:33.742 Scatter-Gather List
00:31:33.742 SGL Command Set: Supported
00:31:33.742 SGL Keyed: Supported
00:31:33.742 SGL Bit Bucket Descriptor: Not Supported
00:31:33.742 SGL Metadata Pointer: Not Supported
00:31:33.742 Oversized SGL: Not Supported
00:31:33.742 SGL Metadata Address: Not Supported
00:31:33.742 SGL Offset: Supported
00:31:33.742 Transport SGL Data Block: Not Supported
00:31:33.742 Replay Protected Memory Block: Not Supported
00:31:33.742
00:31:33.742 Firmware Slot Information
00:31:33.742 =========================
00:31:33.742 Active slot: 1
00:31:33.742 Slot 1 Firmware Revision: 24.09
00:31:33.742
00:31:33.742
00:31:33.742 Commands Supported and Effects
00:31:33.742 ==============================
00:31:33.742 Admin Commands
00:31:33.742 --------------
00:31:33.742 Get Log Page (02h): Supported
00:31:33.742 Identify (06h): Supported
00:31:33.742 Abort (08h): Supported
00:31:33.742 Set Features (09h): Supported
00:31:33.742 Get Features (0Ah): Supported
00:31:33.742 Asynchronous Event Request (0Ch): Supported
00:31:33.742 Keep Alive (18h): Supported
00:31:33.742 I/O Commands
00:31:33.742 ------------
00:31:33.742 Flush (00h): Supported LBA-Change
00:31:33.742 Write (01h): Supported LBA-Change
00:31:33.742 Read (02h): Supported
00:31:33.742 Compare (05h): Supported
00:31:33.742 Write Zeroes (08h): Supported LBA-Change
00:31:33.742 Dataset Management (09h): Supported LBA-Change
00:31:33.742 Copy (19h): Supported LBA-Change
00:31:33.742
00:31:33.742 Error Log
00:31:33.742 =========
00:31:33.742
00:31:33.742 Arbitration
00:31:33.742 ===========
00:31:33.742 Arbitration Burst: 1
00:31:33.742
00:31:33.742 Power Management
00:31:33.742 ================
00:31:33.742 Number of Power States: 1
00:31:33.742 Current Power State: Power State #0
00:31:33.742 Power State #0:
00:31:33.742 Max Power: 0.00 W
00:31:33.742 Non-Operational State: Operational
00:31:33.742 Entry Latency: Not Reported
00:31:33.742 Exit Latency: Not Reported
00:31:33.742 Relative Read Throughput: 0
00:31:33.742 Relative Read Latency: 0
00:31:33.742 Relative Write Throughput: 0
00:31:33.742 Relative Write Latency: 0
00:31:33.742 Idle Power: Not Reported
00:31:33.742 Active Power: Not Reported
00:31:33.742 Non-Operational Permissive Mode: Not Supported
00:31:33.742
00:31:33.742 Health Information
00:31:33.742 ==================
00:31:33.742 Critical Warnings:
00:31:33.742 Available Spare Space: OK
00:31:33.742 Temperature: OK
00:31:33.742 Device Reliability: OK
00:31:33.742 Read Only: No
00:31:33.742 Volatile Memory Backup: OK
00:31:33.742 Current Temperature: 0 Kelvin (-273 Celsius)
00:31:33.742 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:31:33.742 Available Spare: 0%
00:31:33.742 Available Spare Threshold: 0%
00:31:33.742 Life Percentage Used:[2024-07-10 23:35:42.796682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:33.742 [2024-07-10 23:35:42.796690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80)
00:31:33.742 [2024-07-10 23:35:42.796704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.742 [2024-07-10 23:35:42.796726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0
00:31:33.742 [2024-07-10 23:35:42.796821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:33.742 [2024-07-10 23:35:42.796833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:33.742 [2024-07-10 23:35:42.796838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:33.742 [2024-07-10 23:35:42.796844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80
00:31:33.742 [2024-07-10 23:35:42.796893] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:31:33.742 [2024-07-10 23:35:42.796907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80
00:31:33.742 [2024-07-10 23:35:42.796920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.743 [2024-07-10 23:35:42.796927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80
00:31:33.743 [2024-07-10 23:35:42.796936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.743 [2024-07-10 23:35:42.796943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80
00:31:33.743 [2024-07-10 23:35:42.796951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:33.743 [2024-07-10 23:35:42.796957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:34.005 [2024-07-10 23:35:42.796964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.005 [2024-07-10 23:35:42.796978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.796986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.796993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.797006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.797025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.005 [2024-07-10 23:35:42.797157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.005 [2024-07-10 23:35:42.797175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.005 [2024-07-10 23:35:42.797182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.005 [2024-07-10 23:35:42.797203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.797226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.797246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.005 [2024-07-10 23:35:42.797382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.005 [2024-07-10 23:35:42.797393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.005 [2024-07-10 23:35:42.797398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.005 [2024-07-10 23:35:42.797411] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:34.005 [2024-07-10 23:35:42.797418] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:34.005 [2024-07-10 23:35:42.797431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.797458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.797473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.005 [2024-07-10 23:35:42.797602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.005 [2024-07-10 23:35:42.797611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.005 [2024-07-10 
23:35:42.797620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.005 [2024-07-10 23:35:42.797639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.797660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.797674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.005 [2024-07-10 23:35:42.797776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.005 [2024-07-10 23:35:42.797785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.005 [2024-07-10 23:35:42.797790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.005 [2024-07-10 23:35:42.797810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.797831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.797845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.005 [2024-07-10 23:35:42.797955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.005 [2024-07-10 23:35:42.797967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.005 [2024-07-10 23:35:42.797972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.005 [2024-07-10 23:35:42.797990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.797996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.798001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.798010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.798025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.005 [2024-07-10 23:35:42.798105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.005 [2024-07-10 23:35:42.798114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.005 [2024-07-10 23:35:42.798119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.798124] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.005 [2024-07-10 23:35:42.798139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.798145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.005 [2024-07-10 23:35:42.798153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.005 [2024-07-10 23:35:42.798169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.005 [2024-07-10 23:35:42.798184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.006 [2024-07-10 23:35:42.798338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.006 [2024-07-10 23:35:42.798349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.006 [2024-07-10 23:35:42.798354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.006 [2024-07-10 23:35:42.798372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.006 [2024-07-10 23:35:42.798395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.006 [2024-07-10 23:35:42.798409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.006 [2024-07-10 23:35:42.798493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.006 [2024-07-10 23:35:42.798502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.006 [2024-07-10 23:35:42.798507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.006 [2024-07-10 23:35:42.798526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.006 [2024-07-10 23:35:42.798546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.006 [2024-07-10 23:35:42.798563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.006 [2024-07-10 23:35:42.798665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.006 [2024-07-10 23:35:42.798676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.006 [2024-07-10 23:35:42.798681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.006 [2024-07-10 23:35:42.798699] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.006 [2024-07-10 23:35:42.798720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.006 [2024-07-10 23:35:42.798733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.006 [2024-07-10 23:35:42.798841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.006 [2024-07-10 23:35:42.798850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.006 [2024-07-10 23:35:42.798854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.006 [2024-07-10 23:35:42.798875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.798888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.006 [2024-07-10 23:35:42.798897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.006 [2024-07-10 23:35:42.798911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.006 [2024-07-10 23:35:42.799019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.006 [2024-07-10 23:35:42.799028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.006 [2024-07-10 23:35:42.799033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.799038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.006 [2024-07-10 23:35:42.799053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.799060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.799067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:34.006 [2024-07-10 23:35:42.799086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:34.006 [2024-07-10 23:35:42.799101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:34.006 [2024-07-10 23:35:42.803174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:34.006 [2024-07-10 23:35:42.803191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:34.006 [2024-07-10 23:35:42.803200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.803207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:34.006 [2024-07-10 23:35:42.803225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:34.006 [2024-07-10 23:35:42.803232] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:34.006 [2024-07-10 23:35:42.803237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80)
00:31:34.006 [2024-07-10 23:35:42.803248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.006 [2024-07-10 23:35:42.803267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:31:34.006 [2024-07-10 23:35:42.803389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:34.006 [2024-07-10 23:35:42.803400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:34.006 [2024-07-10 23:35:42.803405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:34.006 [2024-07-10 23:35:42.803411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:34.006 [2024-07-10 23:35:42.803424] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:31:34.006 0%
00:31:34.006 Data Units Read: 0
00:31:34.006 Data Units Written: 0
00:31:34.006 Host Read Commands: 0
00:31:34.006 Host Write Commands: 0
00:31:34.006 Controller Busy Time: 0 minutes
00:31:34.006 Power Cycles: 0
00:31:34.006 Power On Hours: 0 hours
00:31:34.006 Unsafe Shutdowns: 0
00:31:34.006 Unrecoverable Media Errors: 0
00:31:34.006 Lifetime Error Log Entries: 0
00:31:34.006 Warning Temperature Time: 0 minutes
00:31:34.006 Critical Temperature Time: 0 minutes
00:31:34.006
00:31:34.006 Number of Queues
00:31:34.006 ================
00:31:34.006 Number of I/O Submission Queues: 127
00:31:34.006 Number of I/O Completion Queues: 127
00:31:34.006
00:31:34.006 Active Namespaces
00:31:34.006 =================
00:31:34.006 Namespace ID:1
00:31:34.006 Error Recovery Timeout: Unlimited
00:31:34.006 Command Set Identifier: NVM (00h)
00:31:34.006 Deallocate: Supported
00:31:34.006 Deallocated/Unwritten Error: Not Supported
00:31:34.006 Deallocated Read Value: Unknown
00:31:34.006 Deallocate in Write Zeroes: Not Supported
00:31:34.006 Deallocated Guard Field: 0xFFFF
00:31:34.006 Flush: Supported
00:31:34.006 Reservation: Supported
00:31:34.006 Namespace Sharing Capabilities: Multiple Controllers
00:31:34.006 Size (in LBAs): 131072 (0GiB)
00:31:34.006 Capacity (in LBAs): 131072 (0GiB)
00:31:34.006 Utilization (in LBAs): 131072 (0GiB)
00:31:34.006 NGUID: ABCDEF0123456789ABCDEF0123456789
00:31:34.006 EUI64: ABCDEF0123456789
00:31:34.006 UUID: b9396b93-4857-41aa-ae58-1a98377ab5c5
00:31:34.006 Thin Provisioning: Not Supported
00:31:34.006 Per-NS Atomic Units: Yes
00:31:34.006 Atomic Boundary Size (Normal): 0
00:31:34.006 Atomic Boundary Size (PFail): 0
00:31:34.006 Atomic Boundary Offset: 0
00:31:34.006 Maximum Single Source Range Length: 65535
00:31:34.006 Maximum Copy Length: 65535
00:31:34.006 Maximum Source Range Count: 1
00:31:34.006 NGUID/EUI64 Never Reused: No
00:31:34.006 Namespace Write Protected: No
00:31:34.006 Number of LBA Formats: 1
00:31:34.006 Current LBA Format: LBA Format #00
00:31:34.006 LBA Format #00: Data Size: 512 Metadata Size: 0
00:31:34.006
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
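The last xtrace lines above are host/identify.sh tearing the test down: flush the page cache, then ask the target over its RPC socket to delete the subsystem. Done by hand against the same target, the equivalent is roughly the sketch below; the rpc.py wrapper path and default socket /var/tmp/spdk.sock are assumptions based on this job's layout:

#!/usr/bin/env bash
# Hedged sketch of the scripted teardown that rpc_cmd performs here.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_py="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

sync   # flush dirty pages before the namespace disappears
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # same RPC the xtrace above issues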
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:34.006 rmmod nvme_tcp
00:31:34.006 rmmod nvme_fabrics
00:31:34.006 rmmod nvme_keyring
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2575215 ']'
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2575215
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2575215 ']'
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2575215
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2575215
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2575215'
00:31:34.006 killing process with pid 2575215
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2575215
00:31:34.006 23:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2575215
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:35.447 23:35:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:37.984 23:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:37.984
00:31:37.984 real 0m10.390s
00:31:37.984 user 0m10.721s
00:31:37.984 sys 0m4.459s
00:31:37.984 23:35:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:31:37.984 23:35:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:37.984 ************************************
00:31:37.984 END TEST nvmf_identify
00:31:37.984 ************************************
00:31:37.984 23:35:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:31:37.984 23:35:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:31:37.984 23:35:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:31:37.984 23:35:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:37.984 23:35:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:37.984 ************************************
00:31:37.984 START TEST nvmf_perf
00:31:37.984 ************************************
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:31:37.984 * Looking for test storage...
00:31:37.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:37.984 23:35:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:43.255 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:43.256 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:43.256 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:43.256 Found net devices under 0000:86:00.0: cvl_0_0 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.256 23:35:51 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:43.256 Found net devices under 0000:86:00.1: cvl_0_1 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:43.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:43.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:31:43.256 00:31:43.256 --- 10.0.0.2 ping statistics --- 00:31:43.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.256 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:43.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:31:43.256 00:31:43.256 --- 10.0.0.1 ping statistics --- 00:31:43.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.256 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2579173 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2579173 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2579173 ']' 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.256 23:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:43.256 [2024-07-10 23:35:51.863373] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
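The namespace plumbing traced above boils down to a short recipe. A condensed sketch, not the verbatim nvmf_tcp_init; interface names and addresses are the ones used in this run:

  # Target NIC port moves into its own netns; initiator port stays in the root netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The two pings are the gate: only after both directions answer does the script return 0 and start the target inside the namespace.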
00:31:43.256 [2024-07-10 23:35:51.863473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.256 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.256 [2024-07-10 23:35:51.972184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.256 [2024-07-10 23:35:52.189027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.256 [2024-07-10 23:35:52.189072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.256 [2024-07-10 23:35:52.189084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.256 [2024-07-10 23:35:52.189093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.256 [2024-07-10 23:35:52.189102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.256 [2024-07-10 23:35:52.189183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.257 [2024-07-10 23:35:52.189250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.257 [2024-07-10 23:35:52.189342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.257 [2024-07-10 23:35:52.189353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:43.825 23:35:52 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:47.112 23:35:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:47.112 23:35:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:47.112 23:35:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:47.112 23:35:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:47.371 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:47.371 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:31:47.371 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:47.371 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:47.371 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:47.371 [2024-07-10 23:35:56.363510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:31:47.371 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:47.630 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:47.630 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:47.889 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:47.889 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:48.148 23:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.148 [2024-07-10 23:35:57.121519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.148 23:35:57 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:48.407 23:35:57 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:48.407 23:35:57 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:48.407 23:35:57 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:48.407 23:35:57 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:49.787 Initializing NVMe Controllers 00:31:49.787 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:31:49.787 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:49.787 Initialization complete. Launching workers. 00:31:49.787 ======================================================== 00:31:49.787 Latency(us) 00:31:49.787 Device Information : IOPS MiB/s Average min max 00:31:49.787 PCIE (0000:5e:00.0) NSID 1 from core 0: 89084.15 347.98 358.64 51.29 4297.33 00:31:49.787 ======================================================== 00:31:49.787 Total : 89084.15 347.98 358.64 51.29 4297.33 00:31:49.787 00:31:49.787 23:35:58 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:50.045 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.420 Initializing NVMe Controllers 00:31:51.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:51.421 Initialization complete. Launching workers. 
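For reference, the shape of the perf invocations repeated throughout this test. Flag meanings are paraphrased from spdk_nvme_perf usage; the binary is assumed to be on PATH here rather than spelled out with the full workspace path:

  # -q  queue depth (outstanding I/Os per qpair)
  # -o  I/O size in bytes
  # -w  access pattern (randrw here)
  # -M  read percentage for mixed workloads (50 = half reads, half writes)
  # -t  run time in seconds
  # -r  transport ID of the target to connect to
  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later runs only vary -q, -o, -t and occasionally add -O (a second I/O size) or --transport-stat.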
00:31:51.421 ======================================================== 00:31:51.421 Latency(us) 00:31:51.421 Device Information : IOPS MiB/s Average min max 00:31:51.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 130.00 0.51 7742.76 128.41 44802.60 00:31:51.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36.00 0.14 29010.46 7951.09 47898.72 00:31:51.421 ======================================================== 00:31:51.421 Total : 166.00 0.65 12355.03 128.41 47898.72 00:31:51.421 00:31:51.421 23:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:51.421 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.795 Initializing NVMe Controllers 00:31:52.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:52.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:52.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:52.795 Initialization complete. Launching workers. 00:31:52.795 ======================================================== 00:31:52.795 Latency(us) 00:31:52.795 Device Information : IOPS MiB/s Average min max 00:31:52.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9306.16 36.35 3438.87 547.93 8177.06 00:31:52.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3825.67 14.94 8397.16 4918.33 47624.73 00:31:52.795 ======================================================== 00:31:52.795 Total : 13131.83 51.30 4883.36 547.93 47624.73 00:31:52.795 00:31:52.795 23:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:52.795 23:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:52.795 23:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:52.795 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.081 Initializing NVMe Controllers 00:31:56.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.081 Controller IO queue size 128, less than required. 00:31:56.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.081 Controller IO queue size 128, less than required. 00:31:56.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:56.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:56.081 Initialization complete. Launching workers. 
00:31:56.081 ======================================================== 00:31:56.081 Latency(us) 00:31:56.081 Device Information : IOPS MiB/s Average min max 00:31:56.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1252.21 313.05 107028.21 57186.36 328086.38 00:31:56.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.92 140.48 243618.92 130813.43 551701.85 00:31:56.081 ======================================================== 00:31:56.081 Total : 1814.13 453.53 149336.73 57186.36 551701.85 00:31:56.081 00:31:56.081 23:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:56.081 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.081 No valid NVMe controllers or AIO or URING devices found 00:31:56.081 Initializing NVMe Controllers 00:31:56.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.081 Controller IO queue size 128, less than required. 00:31:56.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.081 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:56.081 Controller IO queue size 128, less than required. 00:31:56.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:56.081 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:56.081 WARNING: Some requested NVMe devices were skipped 00:31:56.081 23:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:56.081 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.374 Initializing NVMe Controllers 00:31:59.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.374 Controller IO queue size 128, less than required. 00:31:59.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.374 Controller IO queue size 128, less than required. 00:31:59.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:59.375 Initialization complete. Launching workers. 
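The per-qpair counters printed just below come from --transport-stat. One derived number worth watching is the busy-poll ratio; a small awk sketch, assuming the "field: value" layout shown in this log and that the output was saved to perf.log:

  # busy polls = polls - idle_polls; for NSID 1 below that is 17605 - 6675 = 10930,
  # which matches sock_completions, i.e. roughly 62% of polls did real work.
  awk '$1 == "polls:" {p = $2}
       $1 == "idle_polls:" {printf "busy-poll ratio: %.2f\n", (p - $2) / p}' perf.log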
00:31:59.375 00:31:59.375 ==================== 00:31:59.375 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:59.375 TCP transport: 00:31:59.375 polls: 17605 00:31:59.375 idle_polls: 6675 00:31:59.375 sock_completions: 10930 00:31:59.375 nvme_completions: 4011 00:31:59.375 submitted_requests: 6072 00:31:59.375 queued_requests: 1 00:31:59.375 00:31:59.375 ==================== 00:31:59.375 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:59.375 TCP transport: 00:31:59.375 polls: 17186 00:31:59.375 idle_polls: 5937 00:31:59.375 sock_completions: 11249 00:31:59.375 nvme_completions: 5393 00:31:59.375 submitted_requests: 8056 00:31:59.375 queued_requests: 1 00:31:59.375 ======================================================== 00:31:59.375 Latency(us) 00:31:59.375 Device Information : IOPS MiB/s Average min max 00:31:59.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1002.49 250.62 140396.92 70570.73 495711.08 00:31:59.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1347.99 337.00 94932.58 49170.77 327586.50 00:31:59.375 ======================================================== 00:31:59.375 Total : 2350.48 587.62 114323.35 49170.77 495711.08 00:31:59.375 00:31:59.375 23:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:59.375 23:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.375 23:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:59.375 23:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:31:59.375 23:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=f9705e65-6b75-4c3c-ba4c-973e208a7660 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb f9705e65-6b75-4c3c-ba4c-973e208a7660 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=f9705e65-6b75-4c3c-ba4c-973e208a7660 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:02.660 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:02.919 { 00:32:02.919 "uuid": "f9705e65-6b75-4c3c-ba4c-973e208a7660", 00:32:02.919 "name": "lvs_0", 00:32:02.919 "base_bdev": "Nvme0n1", 00:32:02.919 "total_data_clusters": 238234, 00:32:02.919 "free_clusters": 238234, 00:32:02.919 "block_size": 512, 00:32:02.919 "cluster_size": 4194304 00:32:02.919 } 00:32:02.919 ]' 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f9705e65-6b75-4c3c-ba4c-973e208a7660") .free_clusters' 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f9705e65-6b75-4c3c-ba4c-973e208a7660") .cluster_size' 00:32:02.919 23:36:11 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:32:02.919 952936 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:02.919 23:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9705e65-6b75-4c3c-ba4c-973e208a7660 lbd_0 20480 00:32:03.484 23:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c796f157-d440-44b9-97b9-ef7fdace97a8 00:32:03.485 23:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore c796f157-d440-44b9-97b9-ef7fdace97a8 lvs_n_0 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5c50f228-fd28-4460-9b53-2735e8d53aad 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5c50f228-fd28-4460-9b53-2735e8d53aad 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5c50f228-fd28-4460-9b53-2735e8d53aad 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:04.131 23:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:04.131 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:04.131 { 00:32:04.131 "uuid": "f9705e65-6b75-4c3c-ba4c-973e208a7660", 00:32:04.131 "name": "lvs_0", 00:32:04.131 "base_bdev": "Nvme0n1", 00:32:04.131 "total_data_clusters": 238234, 00:32:04.131 "free_clusters": 233114, 00:32:04.131 "block_size": 512, 00:32:04.131 "cluster_size": 4194304 00:32:04.131 }, 00:32:04.131 { 00:32:04.131 "uuid": "5c50f228-fd28-4460-9b53-2735e8d53aad", 00:32:04.131 "name": "lvs_n_0", 00:32:04.131 "base_bdev": "c796f157-d440-44b9-97b9-ef7fdace97a8", 00:32:04.131 "total_data_clusters": 5114, 00:32:04.131 "free_clusters": 5114, 00:32:04.131 "block_size": 512, 00:32:04.131 "cluster_size": 4194304 00:32:04.131 } 00:32:04.131 ]' 00:32:04.131 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5c50f228-fd28-4460-9b53-2735e8d53aad") .free_clusters' 00:32:04.131 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:32:04.131 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5c50f228-fd28-4460-9b53-2735e8d53aad") .cluster_size' 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:32:04.389 20456 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c50f228-fd28-4460-9b53-2735e8d53aad lbd_nest_0 20456 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=b6257654-f158-4e27-9298-c3d2235b1886 00:32:04.389 23:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:04.648 23:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:04.648 23:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b6257654-f158-4e27-9298-c3d2235b1886 00:32:04.908 23:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.166 23:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:05.166 23:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:05.166 23:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:05.166 23:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:05.166 23:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:05.166 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.375 Initializing NVMe Controllers 00:32:17.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:17.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:17.375 Initialization complete. Launching workers. 00:32:17.375 ======================================================== 00:32:17.375 Latency(us) 00:32:17.375 Device Information : IOPS MiB/s Average min max 00:32:17.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.70 0.02 21465.69 177.99 47546.26 00:32:17.375 ======================================================== 00:32:17.375 Total : 46.70 0.02 21465.69 177.99 47546.26 00:32:17.375 00:32:17.375 23:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:17.375 23:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:17.375 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.358 Initializing NVMe Controllers 00:32:27.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:27.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:27.358 Initialization complete. Launching workers. 
00:32:27.358 ======================================================== 00:32:27.358 Latency(us) 00:32:27.358 Device Information : IOPS MiB/s Average min max 00:32:27.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.00 8.50 14713.23 7889.64 47895.59 00:32:27.358 ======================================================== 00:32:27.358 Total : 68.00 8.50 14713.23 7889.64 47895.59 00:32:27.358 00:32:27.358 23:36:34 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:27.359 23:36:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:27.359 23:36:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:27.359 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.336 Initializing NVMe Controllers 00:32:37.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:37.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:37.336 Initialization complete. Launching workers. 00:32:37.336 ======================================================== 00:32:37.336 Latency(us) 00:32:37.336 Device Information : IOPS MiB/s Average min max 00:32:37.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8048.53 3.93 3975.40 249.40 9698.70 00:32:37.336 ======================================================== 00:32:37.336 Total : 8048.53 3.93 3975.40 249.40 9698.70 00:32:37.336 00:32:37.336 23:36:45 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:37.336 23:36:45 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:37.336 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.309 Initializing NVMe Controllers 00:32:47.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:47.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:47.309 Initialization complete. Launching workers. 00:32:47.309 ======================================================== 00:32:47.309 Latency(us) 00:32:47.309 Device Information : IOPS MiB/s Average min max 00:32:47.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2594.00 324.25 12346.16 690.12 30503.39 00:32:47.309 ======================================================== 00:32:47.309 Total : 2594.00 324.25 12346.16 690.12 30503.39 00:32:47.309 00:32:47.309 23:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:47.309 23:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:47.309 23:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:47.309 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.295 Initializing NVMe Controllers 00:32:57.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.295 Controller IO queue size 128, less than required. 00:32:57.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
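The queue-size warning just above is expected at this depth: an NVMe queue with N entries can hold at most N-1 outstanding commands, so a queue depth of 128 against a 128-entry queue always leaves requests queued in the driver. Two hedged ways around it, either create the transport with a deeper queue (the -q/--max-queue-depth option on nvmf_create_transport; 256 is an illustrative value) or benchmark below the queue size:

  # deeper target queues so qd 128 fits with headroom
  scripts/rpc.py nvmf_create_transport -t tcp -o -q 256
  # ...or keep the transport as-is and stay at qd <= 127
  spdk_nvme_perf -q 127 -o 512 -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'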
00:32:57.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:57.295 Initialization complete. Launching workers. 00:32:57.295 ======================================================== 00:32:57.295 Latency(us) 00:32:57.295 Device Information : IOPS MiB/s Average min max 00:32:57.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12644.49 6.17 10130.54 1679.50 26024.67 00:32:57.295 ======================================================== 00:32:57.295 Total : 12644.49 6.17 10130.54 1679.50 26024.67 00:32:57.295 00:32:57.295 23:37:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:57.295 23:37:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:57.295 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.573 Initializing NVMe Controllers 00:33:09.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:09.573 Controller IO queue size 128, less than required. 00:33:09.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:09.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:09.573 Initialization complete. Launching workers. 00:33:09.573 ======================================================== 00:33:09.573 Latency(us) 00:33:09.573 Device Information : IOPS MiB/s Average min max 00:33:09.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1197.10 149.64 107262.45 22932.42 231225.12 00:33:09.573 ======================================================== 00:33:09.573 Total : 1197.10 149.64 107262.45 22932.42 231225.12 00:33:09.573 00:33:09.573 23:37:16 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.573 23:37:16 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6257654-f158-4e27-9298-c3d2235b1886 00:33:09.573 23:37:17 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:09.573 23:37:17 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c796f157-d440-44b9-97b9-ef7fdace97a8 00:33:09.573 23:37:17 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:09.573 rmmod nvme_tcp 00:33:09.573 rmmod nvme_fabrics 00:33:09.573 rmmod nvme_keyring 00:33:09.573 23:37:18 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2579173 ']' 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2579173 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2579173 ']' 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2579173 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2579173 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2579173' 00:33:09.573 killing process with pid 2579173 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2579173 00:33:09.573 23:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2579173 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:12.106 23:37:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.010 23:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:14.010 00:33:14.010 real 1m36.153s 00:33:14.010 user 5m47.514s 00:33:14.010 sys 0m14.586s 00:33:14.010 23:37:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:14.010 23:37:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:14.010 ************************************ 00:33:14.010 END TEST nvmf_perf 00:33:14.010 ************************************ 00:33:14.010 23:37:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:14.010 23:37:22 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:14.010 23:37:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:14.010 23:37:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:14.010 23:37:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.010 ************************************ 00:33:14.010 START TEST nvmf_fio_host 00:33:14.010 ************************************ 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:14.010 * Looking for test 
storage... 00:33:14.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.010 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:14.011 23:37:22 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:19.282 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:19.282 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.282 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:19.283 Found net devices under 0000:86:00.0: cvl_0_0 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:19.283 Found net devices under 0000:86:00.1: cvl_0_1 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
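The discovery loop above amounts to a sysfs scan for known vendor:device pairs, then mapping each hit to its kernel net device. A condensed sketch, not the verbatim gather_supported_nvmf_pci_devs; the IDs are the E810 pair matched in this run:

  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue     # Intel E810 (ice)
      echo "Found ${pci##*/} -> $(ls "$pci/net" 2>/dev/null)"
  done

Both ports have to resolve to an up net device (cvl_0_0 and cvl_0_1 here) before is_hw=yes and the TCP init path runs.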
00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.283 23:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:19.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:33:19.283 00:33:19.283 --- 10.0.0.2 ping statistics --- 00:33:19.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.283 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:33:19.283 00:33:19.283 --- 10.0.0.1 ping statistics --- 00:33:19.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.283 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2597252 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2597252 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2597252 ']' 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:19.283 23:37:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.283 [2024-07-10 23:37:28.258348] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:33:19.283 [2024-07-10 23:37:28.258431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.283 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.557 [2024-07-10 23:37:28.367647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:19.557 [2024-07-10 23:37:28.579830] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:19.557 [2024-07-10 23:37:28.579872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.557 [2024-07-10 23:37:28.579884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.557 [2024-07-10 23:37:28.579893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.557 [2024-07-10 23:37:28.579901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.557 [2024-07-10 23:37:28.580041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.557 [2024-07-10 23:37:28.580113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.557 [2024-07-10 23:37:28.580185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.557 [2024-07-10 23:37:28.580197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.122 23:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:20.122 23:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:33:20.122 23:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:20.380 [2024-07-10 23:37:29.193946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.380 23:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:20.380 23:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.380 23:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.380 23:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:20.638 Malloc1 00:33:20.638 23:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:20.906 23:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:20.906 23:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.166 [2024-07-10 23:37:30.058909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.166 23:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:21.424 23:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:21.424 23:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:21.425 23:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:21.683 fio-3.35 00:33:21.683 Starting 1 thread 00:33:21.683 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.214 00:33:24.214 test: (groupid=0, jobs=1): err= 0: pid=2597775: Wed Jul 10 23:37:32 2024 00:33:24.214 read: IOPS=9930, BW=38.8MiB/s (40.7MB/s)(77.8MiB/2006msec) 00:33:24.214 slat (nsec): min=1820, max=272902, avg=2036.82, stdev=2621.69 00:33:24.214 clat (usec): min=3531, max=11876, avg=7059.81, stdev=536.07 00:33:24.214 lat (usec): min=3580, max=11878, avg=7061.84, stdev=535.91 00:33:24.214 clat percentiles (usec): 00:33:24.214 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6652], 00:33:24.214 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:33:24.214 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:33:24.214 | 99.00th=[ 8225], 99.50th=[ 8291], 99.90th=[10814], 99.95th=[11207], 00:33:24.214 | 99.99th=[11863] 00:33:24.214 bw ( KiB/s): min=38480, max=40392, per=99.95%, avg=39704.00, stdev=886.45, samples=4 00:33:24.214 iops : min= 9620, max=10098, avg=9926.00, stdev=221.61, samples=4 00:33:24.214 write: IOPS=9951, BW=38.9MiB/s (40.8MB/s)(78.0MiB/2006msec); 0 zone resets 00:33:24.214 slat (nsec): min=1897, max=235093, avg=2126.60, stdev=1890.35 00:33:24.214 clat (usec): min=2756, max=11166, avg=5735.41, stdev=450.85 00:33:24.214 lat (usec): min=2778, max=11168, avg=5737.54, stdev=450.75 00:33:24.214 clat percentiles (usec): 00:33:24.214 | 1.00th=[ 
4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:33:24.214 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:33:24.214 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:33:24.214 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 9503], 99.95th=[10814], 00:33:24.214 | 99.99th=[11207] 00:33:24.214 bw ( KiB/s): min=39000, max=40336, per=100.00%, avg=39808.00, stdev=584.42, samples=4 00:33:24.214 iops : min= 9750, max=10084, avg=9952.00, stdev=146.10, samples=4 00:33:24.214 lat (msec) : 4=0.05%, 10=99.85%, 20=0.10% 00:33:24.214 cpu : usr=74.91%, sys=23.09%, ctx=70, majf=0, minf=1533 00:33:24.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:24.214 issued rwts: total=19921,19962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:24.214 00:33:24.214 Run status group 0 (all jobs): 00:33:24.214 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=77.8MiB (81.6MB), run=2006-2006msec 00:33:24.214 WRITE: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=78.0MiB (81.8MB), run=2006-2006msec 00:33:24.214 ----------------------------------------------------- 00:33:24.214 Suppressions used: 00:33:24.214 count bytes template 00:33:24.214 1 57 /usr/src/fio/parse.c 00:33:24.214 1 8 libtcmalloc_minimal.so 00:33:24.214 ----------------------------------------------------- 00:33:24.214 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:24.214 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:24.499 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:24.499 23:37:33 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:24.499 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:24.499 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:24.499 23:37:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:24.757 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:24.757 fio-3.35 00:33:24.757 Starting 1 thread 00:33:24.757 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.289 00:33:27.289 test: (groupid=0, jobs=1): err= 0: pid=2598405: Wed Jul 10 23:37:36 2024 00:33:27.289 read: IOPS=8673, BW=136MiB/s (142MB/s)(277MiB/2047msec) 00:33:27.289 slat (nsec): min=2909, max=93537, avg=3276.47, stdev=1413.54 00:33:27.289 clat (usec): min=3568, max=53207, avg=8748.76, stdev=6175.17 00:33:27.289 lat (usec): min=3571, max=53212, avg=8752.03, stdev=6175.25 00:33:27.289 clat percentiles (usec): 00:33:27.289 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6390], 00:33:27.289 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7898], 60.00th=[ 8291], 00:33:27.289 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11600], 00:33:27.289 | 99.00th=[49021], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:33:27.289 | 99.99th=[53216] 00:33:27.289 bw ( KiB/s): min=60224, max=85536, per=52.08%, avg=72272.00, stdev=13092.25, samples=4 00:33:27.289 iops : min= 3764, max= 5346, avg=4517.00, stdev=818.27, samples=4 00:33:27.289 write: IOPS=5502, BW=86.0MiB/s (90.2MB/s)(147MiB/1713msec); 0 zone resets 00:33:27.289 slat (usec): min=30, max=194, avg=32.72, stdev= 4.31 00:33:27.289 clat (usec): min=3731, max=17143, avg=10129.54, stdev=1655.65 00:33:27.289 lat (usec): min=3762, max=17176, avg=10162.26, stdev=1655.71 00:33:27.289 clat percentiles (usec): 00:33:27.289 | 1.00th=[ 6783], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 8717], 00:33:27.289 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:33:27.289 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12387], 95.00th=[13042], 00:33:27.289 | 99.00th=[14091], 99.50th=[14746], 99.90th=[16319], 99.95th=[16712], 00:33:27.289 | 99.99th=[17171] 00:33:27.289 bw ( KiB/s): min=61952, max=89056, per=85.65%, avg=75408.00, stdev=14198.54, samples=4 00:33:27.289 iops : min= 3872, max= 5566, avg=4713.00, stdev=887.41, samples=4 00:33:27.289 lat (msec) : 4=0.14%, 10=74.50%, 20=23.96%, 50=0.90%, 100=0.50% 00:33:27.289 cpu : usr=84.86%, sys=13.92%, ctx=43, majf=0, minf=2299 00:33:27.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:27.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:27.289 issued rwts: total=17754,9426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:27.289 00:33:27.289 Run status group 0 (all jobs): 00:33:27.289 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=277MiB (291MB), run=2047-2047msec 00:33:27.289 WRITE: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=147MiB (154MB), run=1713-1713msec 
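Both fio runs above go through the fio_plugin helper, whose ldd/grep/awk steps are easy to misread in the trace. Because this build is ASAN-instrumented, the sanitizer runtime the SPDK fio plugin links against has to appear in LD_PRELOAD ahead of the plugin itself, so the runtime is initialized before any instrumented code loads. A minimal sketch of what the helper does (libasan path as resolved on this machine; per the sanitizers array above, the helper also probes libclang_rt.asan for clang builds):

    plugin=./build/fio/spdk_nvme
    # resolve the ASAN runtime the plugin was linked against
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096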
00:33:27.289 ----------------------------------------------------- 00:33:27.289 Suppressions used: 00:33:27.289 count bytes template 00:33:27.289 1 57 /usr/src/fio/parse.c 00:33:27.289 706 67776 /usr/src/fio/iolog.c 00:33:27.289 1 8 libtcmalloc_minimal.so 00:33:27.289 ----------------------------------------------------- 00:33:27.289 00:33:27.289 23:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:33:27.547 23:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:33:30.833 Nvme0n1 00:33:30.833 23:37:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=fb7e3f16-0854-492b-93f4-e72a2495323e 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb fb7e3f16-0854-492b-93f4-e72a2495323e 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=fb7e3f16-0854-492b-93f4-e72a2495323e 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:34.119 { 00:33:34.119 "uuid": "fb7e3f16-0854-492b-93f4-e72a2495323e", 00:33:34.119 "name": "lvs_0", 00:33:34.119 "base_bdev": "Nvme0n1", 00:33:34.119 "total_data_clusters": 930, 00:33:34.119 "free_clusters": 930, 00:33:34.119 "block_size": 512, 00:33:34.119 "cluster_size": 1073741824 00:33:34.119 } 00:33:34.119 ]' 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fb7e3f16-0854-492b-93f4-e72a2495323e") .free_clusters' 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:33:34.119 23:37:42 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fb7e3f16-0854-492b-93f4-e72a2495323e") .cluster_size' 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:33:34.119 952320 00:33:34.119 23:37:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:34.119 8e12de53-09a1-40b6-88ec-7f92e87fbf9f 00:33:34.119 23:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:34.379 23:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:34.637 23:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:34.637 23:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
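The get_lvs_free_mb arithmetic above is worth spelling out: 930 free clusters at the 1 GiB cluster size requested with -c 1073741824 give 930 * 1073741824 / 1048576 = 952320 MB, exactly the size then handed to bdev_lvol_create. The whole provisioning chain for cnode2, condensed (rpc.py abbreviates the full scripts/rpc.py workspace path; the jq filter keys on the store name where the trace uses its UUID):

    fc=$(./scripts/rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0").free_clusters')
    cs=$(./scripts/rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0").cluster_size')
    free_mb=$(( fc * cs / 1024 / 1024 ))                        # 930 * 1 GiB -> 952320 MB
    ./scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

The same formula reappears below for the nested store lvs_n_0, where 237847 free clusters at the default 4 MiB cluster size yield 951388 MB.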
00:33:34.638 23:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:35.204 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:35.204 fio-3.35 00:33:35.204 Starting 1 thread 00:33:35.204 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.805 00:33:37.805 test: (groupid=0, jobs=1): err= 0: pid=2600143: Wed Jul 10 23:37:46 2024 00:33:37.805 read: IOPS=6777, BW=26.5MiB/s (27.8MB/s)(53.1MiB/2006msec) 00:33:37.805 slat (nsec): min=1818, max=107264, avg=2001.02, stdev=1296.60 00:33:37.805 clat (usec): min=708, max=170828, avg=10385.13, stdev=11083.53 00:33:37.805 lat (usec): min=710, max=170853, avg=10387.13, stdev=11083.74 00:33:37.805 clat percentiles (msec): 00:33:37.805 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:33:37.805 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:33:37.805 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:33:37.805 | 99.00th=[ 12], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:33:37.805 | 99.99th=[ 171] 00:33:37.805 bw ( KiB/s): min=19008, max=29968, per=99.74%, avg=27038.00, stdev=5359.13, samples=4 00:33:37.805 iops : min= 4752, max= 7492, avg=6759.50, stdev=1339.78, samples=4 00:33:37.805 write: IOPS=6773, BW=26.5MiB/s (27.7MB/s)(53.1MiB/2006msec); 0 zone resets 00:33:37.805 slat (nsec): min=1894, max=98174, avg=2089.23, stdev=922.03 00:33:37.805 clat (usec): min=306, max=169111, avg=8371.46, stdev=10375.56 00:33:37.805 lat (usec): min=308, max=169118, avg=8373.55, stdev=10375.80 00:33:37.805 clat percentiles (msec): 00:33:37.805 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:33:37.805 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:33:37.805 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:33:37.805 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:33:37.805 | 99.99th=[ 169] 00:33:37.805 bw ( KiB/s): min=20008, max=29536, per=99.92%, avg=27074.00, stdev=4711.56, samples=4 00:33:37.805 iops : min= 5002, max= 7384, avg=6768.50, stdev=1177.89, samples=4 00:33:37.805 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:33:37.805 lat (msec) : 2=0.02%, 4=0.18%, 10=83.64%, 20=15.67%, 250=0.47% 00:33:37.806 cpu : usr=71.22%, sys=26.98%, ctx=90, majf=0, minf=1530 00:33:37.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:37.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:37.806 issued rwts: total=13595,13588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:37.806 00:33:37.806 Run status group 0 (all jobs): 00:33:37.806 READ: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=53.1MiB (55.7MB), run=2006-2006msec 00:33:37.806 WRITE: bw=26.5MiB/s (27.7MB/s), 26.5MiB/s-26.5MiB/s (27.7MB/s-27.7MB/s), io=53.1MiB (55.7MB), run=2006-2006msec 00:33:37.806 ----------------------------------------------------- 00:33:37.806 Suppressions used: 00:33:37.806 count bytes template 00:33:37.806 1 58 /usr/src/fio/parse.c 00:33:37.806 1 8 libtcmalloc_minimal.so 00:33:37.806 ----------------------------------------------------- 00:33:37.806 00:33:37.806 23:37:46 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:38.064 23:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a4290b7d-458f-42bc-b06b-c639b4f06ea4 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a4290b7d-458f-42bc-b06b-c639b4f06ea4 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a4290b7d-458f-42bc-b06b-c639b4f06ea4 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:39.000 23:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:39.258 { 00:33:39.258 "uuid": "fb7e3f16-0854-492b-93f4-e72a2495323e", 00:33:39.258 "name": "lvs_0", 00:33:39.258 "base_bdev": "Nvme0n1", 00:33:39.258 "total_data_clusters": 930, 00:33:39.258 "free_clusters": 0, 00:33:39.258 "block_size": 512, 00:33:39.258 "cluster_size": 1073741824 00:33:39.258 }, 00:33:39.258 { 00:33:39.258 "uuid": "a4290b7d-458f-42bc-b06b-c639b4f06ea4", 00:33:39.258 "name": "lvs_n_0", 00:33:39.258 "base_bdev": "8e12de53-09a1-40b6-88ec-7f92e87fbf9f", 00:33:39.258 "total_data_clusters": 237847, 00:33:39.258 "free_clusters": 237847, 00:33:39.258 "block_size": 512, 00:33:39.258 "cluster_size": 4194304 00:33:39.258 } 00:33:39.258 ]' 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a4290b7d-458f-42bc-b06b-c639b4f06ea4") .free_clusters' 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a4290b7d-458f-42bc-b06b-c639b4f06ea4") .cluster_size' 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:33:39.258 951388 00:33:39.258 23:37:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:40.195 81cc1f64-cd04-4cec-8d29-b7ee96163cca 00:33:40.195 23:37:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:40.454 23:37:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:40.712 23:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:41.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:41.278 fio-3.35 00:33:41.278 Starting 1 thread 00:33:41.278 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.812 00:33:43.812 test: (groupid=0, jobs=1): err= 0: pid=2601183: Wed Jul 10 23:37:52 2024 00:33:43.812 read: IOPS=6611, BW=25.8MiB/s (27.1MB/s)(51.9MiB/2008msec) 00:33:43.812 slat (nsec): min=1807, max=171943, avg=2483.34, stdev=3258.38 00:33:43.812 clat (usec): min=3762, max=18328, avg=10595.99, stdev=938.54 00:33:43.812 lat (usec): min=3767, max=18330, avg=10598.47, stdev=938.31 00:33:43.812 clat percentiles (usec): 00:33:43.812 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:33:43.812 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:33:43.812 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:33:43.812 | 99.00th=[12649], 99.50th=[12780], 99.90th=[16450], 99.95th=[17695], 00:33:43.812 | 99.99th=[18220] 00:33:43.812 bw ( KiB/s): min=25320, max=27032, per=99.85%, 
avg=26406.00, stdev=780.90, samples=4 00:33:43.812 iops : min= 6330, max= 6758, avg=6601.50, stdev=195.23, samples=4 00:33:43.812 write: IOPS=6618, BW=25.9MiB/s (27.1MB/s)(51.9MiB/2008msec); 0 zone resets 00:33:43.812 slat (nsec): min=1885, max=167883, avg=2561.75, stdev=3147.20 00:33:43.812 clat (usec): min=1783, max=16409, avg=8656.04, stdev=785.52 00:33:43.812 lat (usec): min=1791, max=16412, avg=8658.60, stdev=785.31 00:33:43.812 clat percentiles (usec): 00:33:43.812 | 1.00th=[ 6849], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8029], 00:33:43.812 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:33:43.812 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9765], 00:33:43.812 | 99.00th=[10290], 99.50th=[10683], 99.90th=[13566], 99.95th=[13960], 00:33:43.812 | 99.99th=[16319] 00:33:43.812 bw ( KiB/s): min=26240, max=26624, per=99.99%, avg=26470.00, stdev=163.09, samples=4 00:33:43.812 iops : min= 6560, max= 6656, avg=6617.50, stdev=40.77, samples=4 00:33:43.812 lat (msec) : 2=0.01%, 4=0.08%, 10=61.09%, 20=38.82% 00:33:43.812 cpu : usr=66.17%, sys=26.26%, ctx=521, majf=0, minf=1529 00:33:43.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:43.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:43.812 issued rwts: total=13276,13289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:43.812 00:33:43.812 Run status group 0 (all jobs): 00:33:43.812 READ: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=51.9MiB (54.4MB), run=2008-2008msec 00:33:43.812 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=51.9MiB (54.4MB), run=2008-2008msec 00:33:43.812 ----------------------------------------------------- 00:33:43.812 Suppressions used: 00:33:43.812 count bytes template 00:33:43.812 1 58 /usr/src/fio/parse.c 00:33:43.812 1 8 libtcmalloc_minimal.so 00:33:43.812 ----------------------------------------------------- 00:33:43.812 00:33:43.812 23:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:44.071 23:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:44.071 23:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:48.260 23:37:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:48.260 23:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:51.546 23:37:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:51.546 23:38:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:52.920 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:52.920 rmmod nvme_tcp 00:33:52.920 rmmod nvme_fabrics 00:33:52.920 rmmod nvme_keyring 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2597252 ']' 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2597252 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2597252 ']' 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2597252 00:33:53.179 23:38:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2597252 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2597252' 00:33:53.179 killing process with pid 2597252 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2597252 00:33:53.179 23:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2597252 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:54.556 23:38:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.086 23:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:57.086 00:33:57.086 real 0m42.765s 00:33:57.086 user 2m50.746s 00:33:57.086 sys 0m9.545s 00:33:57.086 23:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:57.086 23:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.086 ************************************ 00:33:57.086 END TEST nvmf_fio_host 00:33:57.086 ************************************ 00:33:57.086 23:38:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:57.086 23:38:05 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test 
nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:57.086 23:38:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:57.086 23:38:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:57.086 23:38:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.086 ************************************ 00:33:57.086 START TEST nvmf_failover 00:33:57.086 ************************************ 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:57.086 * Looking for test storage... 00:33:57.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:57.086 23:38:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:02.360 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:02.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:02.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.361 23:38:10 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:02.361 Found net devices under 0000:86:00.0: cvl_0_0 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:02.361 Found net devices under 0000:86:00.1: cvl_0_1 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:02.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:02.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:34:02.361 00:34:02.361 --- 10.0.0.2 ping statistics --- 00:34:02.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.361 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:02.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:34:02.361 00:34:02.361 --- 10.0.0.1 ping statistics --- 00:34:02.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.361 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2606746 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2606746 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2606746 ']' 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
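The "Found net devices under 0000:86:00.x" lines earlier in this trace come from common.sh walking sysfs: for each supported PCI function it globs the net/ directory to find the bound kernel interface, then strips the path down to the interface name. Roughly, using the array names from the trace:

    # pci_devs was seeded from the ice-bound E810 functions (0x8086:0x159b)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the ifname
        net_devs+=("${pci_net_devs[@]}")
    done
    # -> Found net devices under 0000:86:00.0: cvl_0_0
    # -> Found net devices under 0000:86:00.1: cvl_0_1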
00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:02.361 23:38:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:02.361 [2024-07-10 23:38:11.023665] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:34:02.361 [2024-07-10 23:38:11.023747] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.361 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.361 [2024-07-10 23:38:11.132948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:02.361 [2024-07-10 23:38:11.341994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.361 [2024-07-10 23:38:11.342036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.361 [2024-07-10 23:38:11.342050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.361 [2024-07-10 23:38:11.342075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.361 [2024-07-10 23:38:11.342085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.361 [2024-07-10 23:38:11.342227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.361 [2024-07-10 23:38:11.342290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.361 [2024-07-10 23:38:11.342300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.930 23:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:02.930 [2024-07-10 23:38:11.981619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.189 23:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:03.448 Malloc0 00:34:03.448 23:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:03.448 23:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.707 23:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.966 [2024-07-10 23:38:12.803823] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.966 23:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:03.966 [2024-07-10 23:38:12.988364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.966 23:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:04.224 [2024-07-10 23:38:13.160931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2607013 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2607013 /var/tmp/bdevperf.sock 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2607013 ']' 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:04.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
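At this point the target side is fully assembled: a TCP transport, a 64 MB malloc bdev (512-byte blocks) exported as the namespace of nqn.2016-06.io.spdk:cnode1, and three listeners on 10.0.0.2 so the test has independent paths to drop and restore. A condensed sketch of the RPC sequence from the trace, with RPC introduced here only as shorthand for the full rpc.py path:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                    # one listener per failover path
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done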
00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:04.224 23:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:05.248 23:38:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:05.248 23:38:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:05.248 23:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:05.248 NVMe0n1 00:34:05.248 23:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:05.536 00:34:05.536 23:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2607249 00:34:05.536 23:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:05.536 23:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:06.914 23:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:06.914 [2024-07-10 23:38:15.751617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set
[the identical tcp.c:1607 recv-state message repeats through 23:38:15.751859 while the connection on port 4420 is torn down]
00:34:06.915 23:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:10.206 23:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:10.206 00:34:10.206 23:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:10.464 [2024-07-10 23:38:19.361366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set
[the identical tcp.c:1607 recv-state message repeats through 23:38:19.361979 while the connection on port 4421 is torn down]
00:34:10.465 23:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
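This is the failover loop proper: bdevperf has controller NVMe0 attached through ports 4420 and 4421, and the script removes the listener under the active path, sleeps long enough for the initiator to fail over to the surviving one, attaches the next path, and repeats. The tcp.c:1607 *ERROR* bursts coincide with each removal and are logged while the dropped connection's qpair is torn down; they are expected noise here, not a test failure. One step of the loop, condensed from the trace (RPC as above, BP_SOCK introduced here as shorthand):

  BP_SOCK=/var/tmp/bdevperf.sock
  $RPC -s $BP_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1                    # pre-attach the next path
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421                               # drop the currently active path
  sleep 3                                                      # window for bdevperf to fail over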
00:34:13.751 23:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:13.751 [2024-07-10 23:38:22.559385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.751 23:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:14.696 23:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:14.696 [2024-07-10 23:38:23.758622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set
[the identical tcp.c:1607 recv-state message repeats through 23:38:23.758791 while the connection on port 4422 is torn down]
00:34:14.957 23:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2607249 00:34:21.525 0 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2607013 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2607013 ']' 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover --
common/autotest_common.sh@952 -- # kill -0 2607013 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2607013 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2607013' 00:34:21.525 killing process with pid 2607013 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2607013 00:34:21.525 23:38:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2607013 00:34:21.789 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:21.789 [2024-07-10 23:38:13.246973] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:34:21.789 [2024-07-10 23:38:13.247071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607013 ] 00:34:21.789 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.789 [2024-07-10 23:38:13.352202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.789 [2024-07-10 23:38:13.571611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.789 Running I/O for 15 seconds... 
00:34:21.789 [2024-07-10 23:38:15.752742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.789 [2024-07-10 23:38:15.752785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.789 [2024-07-10 23:38:15.752817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.789 [2024-07-10 23:38:15.752830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical command/completion pairs repeat for every outstanding sequential READ from lba:82096 through lba:82872 (len:8 each), all completing with ABORTED - SQ DELETION (00/08)]
00:34:21.792 [2024-07-10 23:38:15.754927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.792 [2024-07-10 23:38:15.754936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:21.792 [2024-07-10 23:38:15.754947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.792 [2024-07-10 23:38:15.754956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.754967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.792 [2024-07-10 23:38:15.754976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.754987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.792 [2024-07-10 23:38:15.754996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.792 [2024-07-10 23:38:15.755019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.792 [2024-07-10 23:38:15.755040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.792 [2024-07-10 23:38:15.755060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.792 [2024-07-10 23:38:15.755080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.792 [2024-07-10 23:38:15.755101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.792 [2024-07-10 23:38:15.755120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.792 [2024-07-10 23:38:15.755131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755361] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.793 [2024-07-10 23:38:15.755471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(5) to be set 00:34:21.793 [2024-07-10 23:38:15.755494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:21.793 [2024-07-10 23:38:15.755504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:21.793 [2024-07-10 23:38:15.755514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:34:21.793 [2024-07-10 23:38:15.755524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.793 [2024-07-10 23:38:15.755810] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032dc80 was disconnected and freed. reset controller. 
00:34:21.793 [2024-07-10 23:38:15.755826] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:34:21.793 [2024-07-10 23:38:15.755858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:21.793 [2024-07-10 23:38:15.755869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:1-3) with identical ABORTED - SQ DELETION (00/08) completions, condensed ...]
00:34:21.793 [2024-07-10 23:38:15.755937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.793 [2024-07-10 23:38:15.755981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor
00:34:21.793 [2024-07-10 23:38:15.759127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.793 [2024-07-10 23:38:15.923624] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
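The teardown burst above, and the one that follows at 23:38:19, are dominated by per-command NOTICE pairs that differ only in cid and lba. When triaging a failover run like this, it can help to collapse the flood into per-opcode counts. Below is a minimal shell sketch, not part of the SPDK test suite; summarize_aborts.sh and the console.log filename are hypothetical, and the patterns assume only the message format visible in this log.

#!/usr/bin/env bash
# summarize_aborts.sh - hypothetical helper to condense SPDK "ABORTED - SQ DELETION"
# floods into per-opcode counts; console.log is an assumed capture of this output.
LOG=${1:-console.log}

# Tally aborted I/O commands per opcode and report the LBA range they covered.
grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+' "$LOG" |
awk '{ op = $1; split($5, a, ":"); lba = a[2] + 0
       n[op]++
       if (!(op in lo) || lba < lo[op]) lo[op] = lba
       if (lba > hi[op]) hi[op] = lba }
     END { for (op in n) printf "%-5s commands: %4d  lba %d-%d\n", op, n[op], lo[op], hi[op] }'

# Tally SQ DELETION completions per queue id (qid:0 = admin, qid:1 = I/O here).
grep -oE 'ABORTED - SQ DELETION \(00/08\) qid:[0-9]+' "$LOG" | sort | uniq -c

Run against a saved copy of this console output, it would reduce each burst to a few summary lines (READ/WRITE totals plus the qid:0/qid:1 split) instead of a hundred-odd NOTICE pairs.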
00:34:21.793 [2024-07-10 23:38:19.361975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:21.793 [2024-07-10 23:38:19.362017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:2-0) with identical ABORTED - SQ DELETION (00/08) completions, condensed ...]
00:34:21.793 [2024-07-10 23:38:19.362099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set
00:34:21.793 [2024-07-10 23:38:19.362169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:21.793 [2024-07-10 23:38:19.362183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 53 further READ commands (lba:304-720, len:8, SGL TRANSPORT DATA BLOCK) and 73 WRITE commands (lba:736-1312, len:8, SGL DATA BLOCK OFFSET len:0x1000), each paired with the same ABORTED - SQ DELETION (00/08) qid:1 completion, condensed ...]
00:34:21.797 [2024-07-10 23:38:19.364853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:21.797 [2024-07-10 23:38:19.364863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command
completed manually: 00:34:21.797 [2024-07-10 23:38:19.364872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:728 len:8 PRP1 0x0 PRP2 0x0 00:34:21.797 [2024-07-10 23:38:19.364882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:19.365187] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032df00 was disconnected and freed. reset controller. 00:34:21.797 [2024-07-10 23:38:19.365200] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:21.797 [2024-07-10 23:38:19.365211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.797 [2024-07-10 23:38:19.368347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.797 [2024-07-10 23:38:19.368395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:21.797 [2024-07-10 23:38:19.410930] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:21.797 [2024-07-10 23:38:23.759342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 
23:38:23.759531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.797 [2024-07-10 23:38:23.759865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.797 [2024-07-10 23:38:23.759886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.797 [2024-07-10 23:38:23.759908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.797 [2024-07-10 23:38:23.759919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.797 [2024-07-10 23:38:23.759928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.759939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.759948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.759959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.759969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.759980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.759989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 
[2024-07-10 23:38:23.760377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.798 [2024-07-10 23:38:23.760849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.798 [2024-07-10 23:38:23.760859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.760879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.760899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.760920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.760941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.760960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.760981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.760992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77680 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:21.799 [2024-07-10 23:38:23.761264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.799 [2024-07-10 23:38:23.761411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.799 [2024-07-10 23:38:23.761432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 
23:38:23.761479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.799 [2024-07-10 23:38:23.761633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.799 [2024-07-10 23:38:23.761642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.800 [2024-07-10 23:38:23.761664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.800 [2024-07-10 23:38:23.761684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.800 [2024-07-10 23:38:23.761705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.800 [2024-07-10 23:38:23.761724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.800 [2024-07-10 23:38:23.761745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.800 [2024-07-10 23:38:23.761765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.761981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.761992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.762001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.762012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.762021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.762032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.762041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.762052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.762061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.762072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.800 [2024-07-10 23:38:23.762081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.800 [2024-07-10 23:38:23.762105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:21.800 [2024-07-10 23:38:23.762114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:21.800 [2024-07-10 23:38:23.762123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0
00:34:21.800 [2024-07-10 23:38:23.762135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:21.800 [2024-07-10 23:38:23.762415] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032e680 was disconnected and freed. reset controller.
00:34:21.800 [2024-07-10 23:38:23.762429] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:34:21.800 [2024-07-10 23:38:23.762458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:21.800 [2024-07-10 23:38:23.762469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:21.800 [2024-07-10 23:38:23.762479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:21.800 [2024-07-10 23:38:23.762489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:21.800 [2024-07-10 23:38:23.762502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:21.800 [2024-07-10 23:38:23.762511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:21.800 [2024-07-10 23:38:23.762521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:21.800 [2024-07-10 23:38:23.762530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:21.800 [2024-07-10 23:38:23.762539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.800 [2024-07-10 23:38:23.762581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor
00:34:21.800 [2024-07-10 23:38:23.765706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.800 [2024-07-10 23:38:23.801774] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
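
The block above is the final failover leg of this run (10.0.0.2:4422 back to 10.0.0.2:4420). A minimal bash sketch for post-mortem triage of such a log, assuming the bdevperf output above was captured to the same try.txt that host/failover.sh greps at line 65 and cats further below; the match strings and the three-reset pass criterion are taken straight from this log:

# Sketch: tally the failover legs and apply the harness's own pass criterion
LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

# One tallied entry per leg, e.g. "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422"
grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOG" | sort | uniq -c

# host/failover.sh@65-67 requires exactly three successful controller resets
count=$(grep -c 'Resetting controller successful' "$LOG")
(( count == 3 )) || echo "unexpected reset count: $count"

Run against this log, the tally should report one leg each for 4420 to 4421, 4421 to 4422, and 4422 to 4420.
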
00:34:21.800
00:34:21.800 Latency(us)
00:34:21.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:21.800 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:21.800 Verification LBA range: start 0x0 length 0x4000
00:34:21.800 NVMe0n1 : 15.01 9231.55 36.06 668.35 0.00 12904.40 737.28 13962.02
00:34:21.800 ===================================================================================================================
00:34:21.800 Total : 9231.55 36.06 668.35 0.00 12904.40 737.28 13962.02
00:34:21.800 Received shutdown signal, test time was about 15.000000 seconds
00:34:21.800
00:34:21.800 Latency(us)
00:34:21.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:21.800 ===================================================================================================================
00:34:21.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2609822
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2609822 /var/tmp/bdevperf.sock
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2609822 ']'
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:21.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
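
While the second bdevperf instance comes up, the 15.01 s summary above can be sanity-checked with a line of worked arithmetic (a cross-check, not additional test output): at the 4096-byte I/O size shown in the Job line, 9231.55 IOPS is 9231.55 x 4096 / 2^20, about 36.06 MiB/s, which matches the MiB/s column; the 668.35 Fail/s presumably reflect the queued I/O aborted across the three SQ deletions.

# Worked check of the MiB/s column from the IOPS value and the 4 KiB I/O size
awk 'BEGIN { printf "%.2f MiB/s\n", 9231.55 * 4096 / (1024 * 1024) }'   # prints 36.06 MiB/s
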
00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:21.800 23:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:22.738 23:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:22.738 23:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:22.738 23:38:31 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:22.997 [2024-07-10 23:38:31.832976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:22.997 23:38:31 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:22.997 [2024-07-10 23:38:32.013490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:22.997 23:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:23.256 NVMe0n1 00:34:23.514 23:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:23.514 00:34:23.773 23:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:24.031 00:34:24.031 23:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:24.031 23:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:24.290 23:38:33 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:24.290 23:38:33 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:27.577 23:38:36 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:27.577 23:38:36 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:27.577 23:38:36 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2610714 00:34:27.577 23:38:36 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:27.577 23:38:36 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2610714 00:34:28.954 0 00:34:28.954 23:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:28.954 [2024-07-10 23:38:30.897033] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:34:28.954 [2024-07-10 23:38:30.897132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609822 ] 00:34:28.954 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.954 [2024-07-10 23:38:31.002884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.954 [2024-07-10 23:38:31.234182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.954 [2024-07-10 23:38:33.278440] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:28.954 [2024-07-10 23:38:33.278515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.954 [2024-07-10 23:38:33.278532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.954 [2024-07-10 23:38:33.278545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.954 [2024-07-10 23:38:33.278557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.954 [2024-07-10 23:38:33.278568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.954 [2024-07-10 23:38:33.278578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.954 [2024-07-10 23:38:33.278589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.954 [2024-07-10 23:38:33.278600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.954 [2024-07-10 23:38:33.278609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.954 [2024-07-10 23:38:33.278663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.954 [2024-07-10 23:38:33.278690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:28.954 [2024-07-10 23:38:33.289200] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:28.954 Running I/O for 1 seconds... 
00:34:28.954
00:34:28.954                                                Latency(us)
00:34:28.954 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:28.954 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:28.954 Verification LBA range: start 0x0 length 0x4000
00:34:28.954 NVMe0n1 :       1.01    9464.30      36.97       0.00     0.00   13467.39    2521.71   11169.61
00:34:28.954 ===================================================================================================================
00:34:28.954 Total :               9464.30      36.97       0.00     0.00   13467.39    2521.71   11169.61
00:34:28.954 23:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:28.954 23:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:34:28.954 23:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:28.954 23:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:28.954 23:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:34:29.213 23:38:38 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:29.471 23:38:38 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2609822
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2609822 ']'
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2609822
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2609822
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2609822'
killing process with pid 2609822
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2609822
00:34:32.759 23:38:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2609822
00:34:33.695 23:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:34:33.695 23:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:34:33.955
23:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:33.955 rmmod nvme_tcp 00:34:33.955 rmmod nvme_fabrics 00:34:33.955 rmmod nvme_keyring 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2606746 ']' 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2606746 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2606746 ']' 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2606746 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2606746 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2606746' 00:34:33.955 killing process with pid 2606746 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2606746 00:34:33.955 23:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2606746 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:35.441 23:38:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.976 23:38:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:37.976 00:34:37.976 real 0m40.823s 00:34:37.976 user 2m11.899s 00:34:37.976 sys 0m7.391s 00:34:37.976 23:38:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:37.976 23:38:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
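For reference, stripped of the xtrace plumbing, the failover exercise above reduces to the RPC sequence below. This is a condensed sketch assembled from commands already logged in this run, not an exact replay of host/failover.sh: rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the waits and grep checks between steps are omitted.

    # target side: expose nqn.2016-06.io.spdk:cnode1 on additional ports (4420 was added earlier)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side (bdevperf): register all three trids under one bdev_nvme controller
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # detach the active trid while verify I/O is running; bdev_nvme fails over to the next path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # pass criterion: count the resets recorded in the bdevperf output
    grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

Each removal of the active path forces bdev_nvme to reset onto the next registered trid, which is why the check at host/failover.sh@65 above required exactly three 'Resetting controller successful' lines (count=3).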
00:34:37.976 ************************************ 00:34:37.977 END TEST nvmf_failover 00:34:37.977 ************************************ 00:34:37.977 23:38:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:37.977 23:38:46 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:37.977 23:38:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:37.977 23:38:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.977 23:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.977 ************************************ 00:34:37.977 START TEST nvmf_host_discovery 00:34:37.977 ************************************ 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:37.977 * Looking for test storage... 00:34:37.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:37.977 23:38:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:34:37.977 23:38:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:34:43.245 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.246 23:38:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:43.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:43.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:43.246 23:38:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:43.246 Found net devices under 0000:86:00.0: cvl_0_0 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:43.246 Found net devices under 0000:86:00.1: cvl_0_1 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.246 23:38:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:43.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:34:43.246 00:34:43.246 --- 10.0.0.2 ping statistics --- 00:34:43.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.246 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:34:43.246 00:34:43.246 --- 10.0.0.1 ping statistics --- 00:34:43.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.246 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2615365 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2615365 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2615365 ']' 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:43.246 23:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.246 [2024-07-10 23:38:52.009224] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:34:43.246 [2024-07-10 23:38:52.009311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.246 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.246 [2024-07-10 23:38:52.117762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.505 [2024-07-10 23:38:52.338287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.505 [2024-07-10 23:38:52.338321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.505 [2024-07-10 23:38:52.338333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.505 [2024-07-10 23:38:52.338345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.505 [2024-07-10 23:38:52.338355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
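For orientation, the nvmf/common.sh steps logged just above wire the two ice ports into a small two-namespace TCP testbed before the target starts. The following is a condensed sketch of those same commands, minus the address flushes and xtrace noise; the interface names cvl_0_0/cvl_0_1 are the ones this machine reported for its e810 NICs.

    ip netns add cvl_0_0_ns_spdk                           # the target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # first port moves into the target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                     # reachability checks, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launched above with 'ip netns exec cvl_0_0_ns_spdk' therefore serves 10.0.0.2 from inside that namespace, while the rest of the test talks to it from the default namespace over cvl_0_1.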
00:34:43.505 [2024-07-10 23:38:52.338382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.764 [2024-07-10 23:38:52.815960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.764 [2024-07-10 23:38:52.824109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.764 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 null0 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 null1 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2615610 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2615610 /tmp/host.sock 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2615610 ']' 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:44.023 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:44.023 23:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 [2024-07-10 23:38:52.921388] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:34:44.023 [2024-07-10 23:38:52.921474] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615610 ] 00:34:44.023 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.023 [2024-07-10 23:38:53.023952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.281 [2024-07-10 23:38:53.239273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:44.848 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.106 23:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 [2024-07-10 23:38:54.003322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:45.106 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.364 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:45.364 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.364 23:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:45.364 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.364 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:34:45.364 23:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:45.931 [2024-07-10 23:38:54.714843] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:45.931 [2024-07-10 23:38:54.714871] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:45.931 [2024-07-10 23:38:54.714903] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:45.931 [2024-07-10 23:38:54.844317] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:46.190 [2024-07-10 23:38:55.069459] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:34:46.190 [2024-07-10 23:38:55.069487] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:46.190 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.449 23:38:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.449 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.450 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.709 [2024-07-10 23:38:55.531627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:46.709 [2024-07-10 23:38:55.532645] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:46.709 [2024-07-10 23:38:55.532688] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.709 [2024-07-10 23:38:55.659084] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:46.709 23:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:46.709 [2024-07-10 23:38:55.719800] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:46.709 [2024-07-10 23:38:55.719825] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:46.709 [2024-07-10 23:38:55.719834] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.643 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.903 [2024-07-10 23:38:56.797011] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:47.903 [2024-07-10 23:38:56.797043] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:47.903 [2024-07-10 23:38:56.800995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:47.903 [2024-07-10 23:38:56.801024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.903 [2024-07-10 23:38:56.801042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.903 [2024-07-10 23:38:56.801069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.903 [2024-07-10 23:38:56.801080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.903 [2024-07-10 23:38:56.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.903 [2024-07-10 23:38:56.801100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.903 [2024-07-10 23:38:56.801110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.903 [2024-07-10 23:38:56.801120] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:47.903 [2024-07-10 23:38:56.811003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.903 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.903 [2024-07-10 23:38:56.821042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:47.903 [2024-07-10 23:38:56.821382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.903 [2024-07-10 23:38:56.821405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.903 [2024-07-10 23:38:56.821417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.903 [2024-07-10 23:38:56.821434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.903 [2024-07-10 23:38:56.821458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.903 [2024-07-10 23:38:56.821468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.903 [2024-07-10 23:38:56.821478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.903 [2024-07-10 23:38:56.821500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
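The `waitforcondition` polling that dominates this trace lives in common/autotest_common.sh; the xtrace exposes its shape at lines @912-@918 (local cond, max=10, a decrementing loop, an eval of the condition, sleep 1). A minimal sketch consistent with those echoed lines, not the verbatim helper (the failure-path return value is an assumption):

```bash
# Polls an arbitrary bash condition up to 10 times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # @915-@916: condition holds, succeed
        sleep 1                    # @918: back off before retrying
    done
    return 1                       # assumed: give up after max attempts
}
```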
00:34:47.903 [2024-07-10 23:38:56.831124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:47.904 [2024-07-10 23:38:56.831433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.904 [2024-07-10 23:38:56.831454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.904 [2024-07-10 23:38:56.831464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.904 [2024-07-10 23:38:56.831480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.904 [2024-07-10 23:38:56.831503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.904 [2024-07-10 23:38:56.831517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.904 [2024-07-10 23:38:56.831526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.904 [2024-07-10 23:38:56.831540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:47.904 [2024-07-10 23:38:56.841201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:47.904 [2024-07-10 23:38:56.841370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.904 [2024-07-10 23:38:56.841388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.904 [2024-07-10 23:38:56.841398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.904 [2024-07-10 23:38:56.841413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.904 [2024-07-10 23:38:56.841427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.904 [2024-07-10 23:38:56.841435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.904 [2024-07-10 23:38:56.841444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.904 [2024-07-10 23:38:56.841458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
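Each polled condition bottoms out in one of three helpers from host/discovery.sh (@55, @59, @63), all of which query the host application over its private RPC socket and flatten the JSON to a single sorted, space-separated line for string comparison. A sketch reconstructed from the commands echoed in the trace (`rpc_cmd` is the autotest wrapper around scripts/rpc.py):

```bash
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {   # discovery.sh@59: names of attached NVMe controllers
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {         # discovery.sh@55: bdevs created from discovered namespaces
    rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {   # discovery.sh@63: one trsvcid per connected path
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```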
00:34:47.904 [2024-07-10 23:38:56.851272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.904 [2024-07-10 23:38:56.851514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.904 [2024-07-10 23:38:56.851534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.904 [2024-07-10 23:38:56.851545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.904 [2024-07-10 23:38:56.851561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.904 [2024-07-10 23:38:56.851587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.904 [2024-07-10 23:38:56.851597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.904 [2024-07-10 23:38:56.851606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.904 [2024-07-10 23:38:56.851623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 [2024-07-10 23:38:56.861351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:47.904 [2024-07-10 23:38:56.861537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.904 [2024-07-10 23:38:56.861556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.904 [2024-07-10 23:38:56.861567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.904 [2024-07-10 23:38:56.861583]
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.904 [2024-07-10 23:38:56.861596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.904 [2024-07-10 23:38:56.861606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.904 [2024-07-10 23:38:56.861615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.904 [2024-07-10 23:38:56.861629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:47.904 [2024-07-10 23:38:56.871423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:47.904 [2024-07-10 23:38:56.871609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.904 [2024-07-10 23:38:56.871628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.904 [2024-07-10 23:38:56.871639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.904 [2024-07-10 23:38:56.871654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.904 [2024-07-10 23:38:56.871667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.904 [2024-07-10 23:38:56.871676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.904 [2024-07-10 23:38:56.871686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.904 [2024-07-10 23:38:56.871699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.904 [2024-07-10 23:38:56.881493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:47.904 [2024-07-10 23:38:56.881638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.904 [2024-07-10 23:38:56.881656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:47.904 [2024-07-10 23:38:56.881667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:47.904 [2024-07-10 23:38:56.881681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:47.904 [2024-07-10 23:38:56.881694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.904 [2024-07-10 23:38:56.881704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.904 [2024-07-10 23:38:56.881714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.904 [2024-07-10 23:38:56.881727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
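The `is_notification_count_eq` assertions go through `get_notification_count` (discovery.sh@74-@75). From the `-i 0/1/2` cursor arguments and the notify_id progression echoed in the trace (0, 1, 2, then 4), it evidently counts notifications past a cursor and advances the cursor by that count; a plausible reconstruction, not the verbatim helper:

```bash
# Sets the globals notification_count and notify_id used by the assertions.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')                            # @74: entries newer than the cursor
    notify_id=$((notify_id + notification_count))     # @75: advance the cursor (inferred)
}
```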
00:34:47.904 [2024-07-10 23:38:56.883179] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:47.904 [2024-07-10 23:38:56.883212] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 23:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:48.163 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:48.164 
23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.164 23:38:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.537 [2024-07-10 23:38:58.205865] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:49.537 [2024-07-10 23:38:58.205888] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:49.537 [2024-07-10 23:38:58.205913] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:49.538 [2024-07-10 23:38:58.333346] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:49.538 [2024-07-10 23:38:58.441002] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:49.538 [2024-07-10 23:38:58.441040] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:49.538 request: 00:34:49.538 { 00:34:49.538 "name": "nvme", 00:34:49.538 "trtype": "tcp", 00:34:49.538 "traddr": "10.0.0.2", 00:34:49.538 "adrfam": "ipv4", 00:34:49.538 "trsvcid": "8009", 00:34:49.538 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:49.538 "wait_for_attach": true, 00:34:49.538 "method": "bdev_nvme_start_discovery", 00:34:49.538 "req_id": 1 00:34:49.538 } 00:34:49.538 Got JSON-RPC error response 00:34:49.538 response: 00:34:49.538 { 00:34:49.538 "code": -17, 00:34:49.538 "message": "File exists" 00:34:49.538 } 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.538 request: 00:34:49.538 { 00:34:49.538 "name": "nvme_second", 00:34:49.538 "trtype": "tcp", 00:34:49.538 "traddr": "10.0.0.2", 00:34:49.538 "adrfam": "ipv4", 00:34:49.538 "trsvcid": "8009", 00:34:49.538 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:49.538 "wait_for_attach": true, 00:34:49.538 "method": "bdev_nvme_start_discovery", 00:34:49.538 "req_id": 1 00:34:49.538 } 00:34:49.538 Got JSON-RPC error response 00:34:49.538 response: 00:34:49.538 { 00:34:49.538 "code": -17, 00:34:49.538 "message": "File exists" 00:34:49.538 } 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:49.538 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.796 23:38:58 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.796 23:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.730 [2024-07-10 23:38:59.684562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:50.730 [2024-07-10 23:38:59.684598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=8010 00:34:50.730 [2024-07-10 23:38:59.684648] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:50.730 [2024-07-10 23:38:59.684659] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:50.730 [2024-07-10 23:38:59.684669] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:51.665 [2024-07-10 23:39:00.687197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.665 [2024-07-10 23:39:00.687242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e400 with addr=10.0.0.2, port=8010 00:34:51.665 [2024-07-10 23:39:00.687305] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:51.665 [2024-07-10 23:39:00.687315] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:51.665 [2024-07-10 23:39:00.687325] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:53.040 [2024-07-10 23:39:01.689209] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:53.040 request: 00:34:53.040 { 00:34:53.040 "name": "nvme_second", 00:34:53.040 "trtype": "tcp", 00:34:53.040 "traddr": "10.0.0.2", 00:34:53.040 "adrfam": "ipv4", 00:34:53.040 "trsvcid": "8010", 00:34:53.040 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:53.040 "wait_for_attach": false, 00:34:53.040 "attach_timeout_ms": 3000, 00:34:53.040 "method": "bdev_nvme_start_discovery", 00:34:53.040 "req_id": 1 00:34:53.040 } 00:34:53.040 Got JSON-RPC error response 00:34:53.040 response: 00:34:53.040 { 00:34:53.040 "code": -110, 
00:34:53.040 "message": "Connection timed out" 00:34:53.040 } 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2615610 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:53.040 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:53.040 rmmod nvme_tcp 00:34:53.040 rmmod nvme_fabrics 00:34:53.041 rmmod nvme_keyring 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2615365 ']' 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2615365 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2615365 ']' 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2615365 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2615365 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2615365' 00:34:53.041 killing process with pid 2615365 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2615365 00:34:53.041 23:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2615365 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.416 23:39:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:56.319 00:34:56.319 real 0m18.625s 00:34:56.319 user 0m23.967s 00:34:56.319 sys 0m5.366s 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:56.319 ************************************ 00:34:56.319 END TEST nvmf_host_discovery 00:34:56.319 ************************************ 00:34:56.319 23:39:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:56.319 23:39:05 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:56.319 23:39:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:56.319 23:39:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:56.319 23:39:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.319 ************************************ 00:34:56.319 START TEST nvmf_host_multipath_status 00:34:56.319 ************************************ 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:56.319 * Looking for test storage... 
00:34:56.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.319 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.577 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:56.577 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:56.578 23:39:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:56.578 23:39:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:01.869 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:01.869 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
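
[annotation] The two "Found 0000:86:00.*" lines are the e810 branch of the PCI scan above: vendor 0x8086 with device 0x159b is an Intel E810 port. The same enumeration can be done directly from sysfs; a sketch under that assumption (the function name is ours, the sysfs layout is standard):

    # Enumerate Intel E810 functions (0x8086/0x159b) and the net devices bound
    # to them, mirroring what gather_supported_nvmf_pci_devs reports above.
    find_e810_netdevs() {
        local pci
        for pci in /sys/bus/pci/devices/*; do
            [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
            echo "Found ${pci##*/} (0x8086 - 0x159b)"
            ls "$pci/net" 2>/dev/null   # e.g. cvl_0_0 / cvl_0_1
        done
    }
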
00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.869 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:01.870 Found net devices under 0000:86:00.0: cvl_0_0 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:01.870 Found net devices under 0000:86:00.1: cvl_0_1 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.870 23:39:10 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:35:01.870 00:35:01.870 --- 10.0.0.2 ping statistics --- 00:35:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.870 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:35:01.870 00:35:01.870 --- 10.0.0.1 ping statistics --- 00:35:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.870 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2621197 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2621197 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2621197 ']' 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:01.870 23:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:01.870 [2024-07-10 23:39:10.526530] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
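
[annotation] Before nvmf_tgt was launched just above, the two successful pings confirmed the bring-up that nvmf_tcp_init traced: the second E810 port is moved into a private network namespace so target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace) traffic actually crosses the physical link. Condensed from the commands in the log, with the same interface and namespace names:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
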
00:35:01.870 [2024-07-10 23:39:10.526615] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.870 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.870 [2024-07-10 23:39:10.634368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:01.870 [2024-07-10 23:39:10.847193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.870 [2024-07-10 23:39:10.847240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.870 [2024-07-10 23:39:10.847254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.870 [2024-07-10 23:39:10.847263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.870 [2024-07-10 23:39:10.847272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.870 [2024-07-10 23:39:10.847387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.870 [2024-07-10 23:39:10.847400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2621197 00:35:02.449 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:02.449 [2024-07-10 23:39:11.495903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.763 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:02.763 Malloc0 00:35:02.763 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:03.022 23:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:03.281 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.281 [2024-07-10 23:39:12.253939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.281 23:39:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:03.541 [2024-07-10 23:39:12.418399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2621476 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2621476 /var/tmp/bdevperf.sock 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2621476 ']' 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:03.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:03.541 23:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:04.478 23:39:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:04.478 23:39:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:04.478 23:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:04.478 23:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:35:05.043 Nvme0n1 00:35:05.043 23:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:05.301 Nvme0n1 00:35:05.301 23:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:05.301 23:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:07.832 23:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:07.832 23:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:07.832 23:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:07.832 23:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:08.786 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:08.786 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:08.786 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.786 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:09.045 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.045 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:09.045 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.045 23:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:09.045 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.045 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:09.045 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.045 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:09.304 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.304 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:09.304 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.304 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:09.563 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.563 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:09.563 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.563 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:09.822 23:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:10.081 23:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:10.340 23:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:11.279 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:11.279 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:11.279 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.279 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:11.538 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.538 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:11.538 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.538 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.797 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:12.055 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.055 23:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:12.055 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.055 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:12.313 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:12.572 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:12.830 23:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:13.766 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:13.766 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.766 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.766 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:14.026 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.026 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:14.026 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.026 23:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.284 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:14.543 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.543 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:14.543 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.543 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.801 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.801 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:14.801 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.801 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:15.059 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.059 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:15.059 23:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:15.059 23:39:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:15.317 23:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:16.253 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:16.253 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:16.253 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.253 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:16.512 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.512 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:16.512 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.512 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.772 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:17.031 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.031 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:17.031 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.031 23:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:17.289 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:17.547 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:17.805 23:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:18.738 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:18.738 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:18.738 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.738 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:18.997 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:18.997 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:18.997 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.997 23:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.255 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:19.513 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.513 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:19.513 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.513 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:19.771 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:20.029 23:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:20.287 23:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:21.223 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:21.223 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:21.223 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.223 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:21.482 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:21.482 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:21.482 23:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
00:35:22.275 23:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:35:22.534 23:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:35:22.534 23:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:35:22.792 23:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:22.792 23:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
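Up to this point the bdev ran with the default active_passive policy, so at most one path could report current=true at a time. The bdev_nvme_set_multipath_policy call just traced switches Nvme0n1 to active_active, under which every optimized, accessible path can be current at once; that is why the next check_status expects current=true on both 4420 and 4421. A convenient way to eyeball all path flags in one shot (a hypothetical one-liner for debugging, not part of the test):

"$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
	| jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'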
00:35:23.793 23:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:35:23.793 23:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:24.052 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:24.052 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:24.311 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:24.311 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:24.569 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:24.569 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:24.569 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:24.569 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:24.827 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:24.827 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:25.086 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:25.086 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:35:25.086 23:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:25.345 23:39:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:25.345 23:39:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:35:26.724 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:35:26.724 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:26.724 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:26.724 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:26.724 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:26.724 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:26.983 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:26.983 23:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:27.242 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:27.242 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:27.243 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:27.243 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:27.502 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
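The assertions in the @110-@135 blocks pin down how a listener's ANA state translates into host-side path flags under the active_active policy; summarized in comment form (derived from the traced expectations, not from a spec table in this log):

# ANA state of listener   accessible   current
# optimized               true         true
# non_optimized           true         true, unless an optimized path exists
# inaccessible            false        false
#
# connected stays true throughout: flipping ANA state steers I/O away from a
# path but does not tear down its TCP connection.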
00:35:27.502 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:35:27.502 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:27.760 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:35:28.019 23:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:35:28.983 23:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:35:28.983 23:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:29.242 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.242 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:29.242 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.243 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:29.501 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.501 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:29.760 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.760 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:29.760 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.760 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:30.019 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:30.019 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:35:30.019 23:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:30.279 23:39:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:30.537 23:39:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:35:31.473 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:35:31.473 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:31.732 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.732 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:31.732 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:31.732 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:31.991 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.991 23:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:32.249 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:32.249 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:32.249 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:32.249 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
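With every ANA permutation verified, the test tears down the I/O generator; the killprocess trace below comes from common/autotest_common.sh. A reconstruction of its visible steps in bash (the sudo branch is elided, so treat this as a sketch of the traced behavior, not the helper's exact source):

killprocess() {
	local pid=$1 process_name
	[ -n "$pid" ] || return 1                            # @948: refuse an empty pid
	kill -0 "$pid"                                       # @952: fail fast if it is already gone
	if [ "$(uname)" = Linux ]; then                      # @953
		process_name=$(ps --no-headers -o comm= "$pid")  # @954: reactor_2 in this run
	fi
	# @958: the real helper special-cases process_name = sudo; skipped in this sketch
	echo "killing process with pid $pid"                 # @966
	kill "$pid"                                          # @967
	wait "$pid"                                          # @972: reap it and propagate its exit status
}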
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2621476
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2621476 ']'
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2621476
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2621476
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:35:32.508 23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2621476'
killing process with pid 2621476
23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2621476
23:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2621476
00:35:33.077 Connection closed with partial response:
00:35:33.077
00:35:33.077
00:35:33.648 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2621476
00:35:33.648 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:33.648 [2024-07-10 23:39:12.494905] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:35:33.648 [2024-07-10 23:39:12.495000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621476 ]
00:35:33.648 EAL: No free 2048 kB hugepages reported on node 1
00:35:33.648 [2024-07-10 23:39:12.596544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:33.648 [2024-07-10 23:39:12.819683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:35:33.648 Running I/O for 90 seconds...
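Everything below is the bdevperf output replayed from try.txt. The NOTICE pairs are bdevperf's view of I/O that hit a path while its listener was in the inaccessible ANA state: nvme_io_qpair_print_command prints the submitted READ/WRITE and spdk_nvme_print_completion prints its completion status, ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, status code 02h), the target's way of saying the namespace must be reached through another path. During the ANA flips above these completions are expected and the I/O is retried elsewhere; they are not test failures. To tally them when skimming such a dump (a hypothetical convenience command, not part of the test):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt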
00:35:33.648 [2024-07-10 23:39:26.515758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.648 [2024-07-10 23:39:26.515816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:35:33.648 [2024-07-10 23:39:26.515871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:33.648 [2024-07-10 23:39:26.515884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[~116 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITEs covering lba 12696 through 13600 in steps of 8, plus READs for lba 12672 and 12680, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:000a through sqhd:007d]
00:35:33.651 [2024-07-10 23:39:26.520318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:26.520633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:26.520643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:39.341588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:39.341637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:39.341688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:39.341701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:39.341720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.651 [2024-07-10 23:39:39.341730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:39.341748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.651 [2024-07-10 23:39:39.341759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.651 [2024-07-10 23:39:39.341777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.651 [2024-07-10 23:39:39.341787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.341819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.341846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.341873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.341899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.341925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.341952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.341979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.341997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.342007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.342024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.342034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.342052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.342061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.342078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.652 [2024-07-10 23:39:39.342088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:33.652 [2024-07-10 23:39:39.345386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.652 [2024-07-10 23:39:39.345877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:33.652 [2024-07-10 23:39:39.345896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.653 [2024-07-10 23:39:39.345906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:33.653 [2024-07-10 23:39:39.345924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.653 [2024-07-10 23:39:39.345934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:33.653 Received shutdown signal, test time was about 27.006798 seconds 00:35:33.653 00:35:33.653 Latency(us) 00:35:33.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.653 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:33.653 Verification LBA range: start 0x0 length 0x4000 00:35:33.653 Nvme0n1 : 27.01 8925.76 34.87 0.00 0.00 14318.20 705.22 3019898.88 00:35:33.653 =================================================================================================================== 00:35:33.653 Total : 8925.76 34.87 0.00 0.00 14318.20 705.22 3019898.88 00:35:33.653 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:33.912 rmmod nvme_tcp 00:35:33.912 rmmod nvme_fabrics 00:35:33.912 rmmod nvme_keyring 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2621197 ']' 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2621197 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2621197 ']' 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2621197 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2621197 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2621197'
00:35:33.912 killing process with pid 2621197
00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2621197
00:35:33.912 23:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2621197
00:35:35.814 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:35:35.814 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:35:35.814 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:35:35.815 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:35.815 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:35:35.815 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:35.815 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:35.815 23:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:37.720 23:39:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:35:37.720
00:35:37.720 real	0m41.183s
00:35:37.720 user	1m50.960s
00:35:37.720 sys	0m10.212s
00:35:37.720 23:39:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:35:37.720 23:39:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:35:37.720 ************************************
00:35:37.720 END TEST nvmf_host_multipath_status
00:35:37.720 ************************************
00:35:37.720 23:39:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:35:37.720 23:39:46 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:37.720 23:39:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:35:37.720 23:39:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:37.720 23:39:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:37.720 ************************************
00:35:37.720 START TEST nvmf_discovery_remove_ifc
00:35:37.720 ************************************
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:37.720 * Looking for test storage...
00:35:37.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
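[annotation] The END TEST / START TEST banners and the real/user/sys lines above come from autotest's run_test wrapper around each test script. The sketch below is only a rough reconstruction of that wrapper's shape, inferred from the banners and the run_test invocation traced in this log, not copied from the real common/autotest_common.sh:

    # illustrative run_test-style wrapper; helper details are assumptions
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # the time keyword produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
    # matching the trace above:
    # run_test nvmf_discovery_remove_ifc test/nvmf/host/discovery_remove_ifc.sh --transport=tcp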
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:37.720 [five near-identical paths/export.sh trace entries elided: @2-@4 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the existing PATH, @5 exports PATH, and @6 echoes the final value, which ends in :/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin]
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
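[annotation] One detail worth pulling out of the common.sh trace above: the initiator identity is generated fresh each run with nvme gen-hostnqn and its UUID is reused as the hostid. A minimal sketch of that wiring follows; the suffix extraction is an assumption, since the trace only shows the resulting values:

    # host identity, as traced above (extraction method assumed)
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare uuid, reused as hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # "${NVME_HOST[@]}" is later appended to 'nvme connect' invocations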
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:35:37.720 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:37.721 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:37.721 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:37.721 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:35:37.721 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:35:37.721 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:35:37.721 23:39:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:35:43.020 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
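[annotation] The gather_supported_nvmf_pci_devs loop just traced reduces to the shape below. This is a sketch assuming pci_devs has already been filled from the PCI-ID tables (the pci_bus_cache lookups are elided), and the index-based interface selection at the end is an assumption; the trace only shows the resulting assignments:

    # map supported NIC PCI addresses to kernel net device names
    net_devs=()
    for pci in "${pci_devs[@]}"; do                       # 0000:86:00.0, 0000:86:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs lists bound interfaces
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # with two ports found, one becomes the target side and the other the initiator side:
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1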
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:35:43.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:43.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms
00:35:43.021
00:35:43.021 --- 10.0.0.2 ping statistics ---
00:35:43.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:43.021 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:43.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:43.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:35:43.021
00:35:43.021 --- 10.0.0.1 ping statistics ---
00:35:43.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:43.021 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:43.021 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2630202
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2630202
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2630202 ']'
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:43.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:43.022 23:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:43.022 [2024-07-10 23:39:51.999008] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
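[annotation] Pulled together, the nvmf_tcp_init steps just traced build the test topology: the target port is moved into a private network namespace and the target application is launched inside it, so target (10.0.0.2) and initiator (10.0.0.1) talk over the physical link. A condensed sketch using only commands and addresses visible in the trace:

    # two-interface TCP topology used by the harness
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    # the target runs inside the namespace, so it can bind 10.0.0.2:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2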
00:35:43.022 [2024-07-10 23:39:51.999096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:43.022 EAL: No free 2048 kB hugepages reported on node 1
00:35:43.281 [2024-07-10 23:39:52.107575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:43.281 [2024-07-10 23:39:52.318542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:43.281 [2024-07-10 23:39:52.318582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:43.281 [2024-07-10 23:39:52.318593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:43.281 [2024-07-10 23:39:52.318604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:43.281 [2024-07-10 23:39:52.318612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:43.281 [2024-07-10 23:39:52.318643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:43.850 [2024-07-10 23:39:52.810037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:43.850 [2024-07-10 23:39:52.818181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:35:43.850 null0
00:35:43.850 [2024-07-10 23:39:52.850180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2630252
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2630252 /tmp/host.sock
00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
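[annotation] At this point two SPDK processes are running: the namespaced target on the default /var/tmp/spdk.sock, and the host-side application just launched with -r /tmp/host.sock --wait-for-rpc. A hedged sketch of the pattern follows; the log's rpc_cmd helper wraps scripts/rpc.py, and the generic form is shown here (the set_options/start_init calls appear in the trace just below):

    # second instance on its own RPC socket so the two apps don't collide
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    # --wait-for-rpc holds the app before subsystem init, allowing early config:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init   # resume initialization
    # RPCs issued without -s continue to reach the target on /var/tmp/spdk.sock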
max_retries=100 00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:43.850 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:43.850 23:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.110 [2024-07-10 23:39:52.946260] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:35:44.110 [2024-07-10 23:39:52.946346] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630252 ] 00:35:44.110 EAL: No free 2048 kB hugepages reported on node 1 00:35:44.110 [2024-07-10 23:39:53.049588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.369 [2024-07-10 23:39:53.272153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.938 23:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.197 23:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.197 23:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:45.197 23:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.197 23:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.132 [2024-07-10 23:39:55.160359] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:46.132 [2024-07-10 23:39:55.160389] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:46.133 [2024-07-10 23:39:55.160418] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:46.392 [2024-07-10 23:39:55.246693] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:46.392 [2024-07-10 23:39:55.433605] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:46.392 [2024-07-10 23:39:55.433661] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:46.392 [2024-07-10 23:39:55.433715] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:46.392 [2024-07-10 23:39:55.433739] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:46.392 [2024-07-10 23:39:55.433765] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:46.392 [2024-07-10 23:39:55.439771] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500032d500 was disconnected and freed. delete nvme_qpair. 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.392 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.651 23:39:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:46.651 23:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:47.628 23:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:49.002 23:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:49.938 23:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:50.871 23:39:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:50.871 23:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:51.803 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.062 [2024-07-10 23:40:00.874891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:52.062 [2024-07-10 23:40:00.874946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:52.062 [2024-07-10 23:40:00.874962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.062 [2024-07-10 23:40:00.874976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:52.062 [2024-07-10 23:40:00.874985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.062 [2024-07-10 23:40:00.874995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:52.062 [2024-07-10 23:40:00.875005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.062 [2024-07-10 23:40:00.875015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:52.062 [2024-07-10 23:40:00.875024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:52.062 [2024-07-10 23:40:00.875034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:52.062 [2024-07-10 23:40:00.875048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:52.062 [2024-07-10 23:40:00.875058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:35:52.062 [2024-07-10 23:40:00.884902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:35:52.062 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:52.062 23:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:52.062 [2024-07-10 23:40:00.894943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:52.995 [2024-07-10 23:40:01.932190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:52.995 [2024-07-10 23:40:01.932253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:35:52.995 [2024-07-10 23:40:01.932276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:35:52.995 [2024-07-10 23:40:01.932319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:35:52.995 [2024-07-10 23:40:01.932936] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:52.995 [2024-07-10 23:40:01.932971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:52.995 [2024-07-10 23:40:01.932993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:52.995 [2024-07-10 23:40:01.933010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:52.995 [2024-07-10 23:40:01.933041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
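The repeated bdev_get_bdevs | jq | sort | xargs cycles above and below are the test's wait loop: it lists the host's bdevs once per second until the list matches the expected value ("nvme0n1" while attached, the empty string once the controller has been dropped). A sketch of the two helpers, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

    get_bdev_list() {
        # Flatten all bdev names into one sorted, space-separated string.
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list equals the expected value passed in $1.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }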
00:35:52.995 [2024-07-10 23:40:01.933057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:52.995 23:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:53.927 [2024-07-10 23:40:02.935565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:53.927 [2024-07-10 23:40:02.935597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:53.927 [2024-07-10 23:40:02.935608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:53.927 [2024-07-10 23:40:02.935618] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:53.927 [2024-07-10 23:40:02.935637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.927 [2024-07-10 23:40:02.935666] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:53.927 [2024-07-10 23:40:02.935701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.927 [2024-07-10 23:40:02.935720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.927 [2024-07-10 23:40:02.935735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.927 [2024-07-10 23:40:02.935744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.927 [2024-07-10 23:40:02.935756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.927 [2024-07-10 23:40:02.935766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.927 [2024-07-10 23:40:02.935777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.927 [2024-07-10 23:40:02.935786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.927 [2024-07-10 23:40:02.935797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.927 [2024-07-10 23:40:02.935807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.927 [2024-07-10 23:40:02.935817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
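How long the host keeps retrying before giving up here is set by the discovery attach from earlier in the trace. With the flags shown there, the bdev layer reconnects roughly once per second, fails pending I/O after one second, and deletes the controller (taking nvme0n1 with it, as the empty bdev list below confirms) after about two seconds of controller loss:

    # Same call the harness made via rpc_cmd, repeated here for the timeout
    # knobs; the flags are exactly those visible in the trace above.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach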
00:35:53.927 [2024-07-10 23:40:02.935895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:35:53.927 [2024-07-10 23:40:02.936981] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:53.927 [2024-07-10 23:40:02.937003] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:53.927 23:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:54.185 23:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:55.118 23:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:56.050 [2024-07-10 23:40:04.996348] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:56.050 [2024-07-10 23:40:04.996375] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:56.050 [2024-07-10 23:40:04.996401] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:56.050 [2024-07-10 23:40:05.084689] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:56.308 [2024-07-10 23:40:05.148110] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:56.308 [2024-07-10 23:40:05.148156] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:56.308 [2024-07-10 23:40:05.148218] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:56.308 [2024-07-10 23:40:05.148237] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:56.308 [2024-07-10 23:40:05.148248] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:56.308 [2024-07-10 23:40:05.196183] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500032dc80 was disconnected and freed. delete nvme_qpair. 
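Note that nothing re-issues the discovery command at this point: once the address and link are restored (the ip commands traced a little earlier), the still-running discovery service re-attaches on its own and surfaces the subsystem as a fresh bdev, nvme1n1. The restore-and-wait step reduces to:

    # Put the target address back and bring the link up inside the namespace,
    # then wait for rediscovery to produce the new bdev (helpers sketched above).
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1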
00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2630252 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2630252 ']' 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2630252 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2630252 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2630252' 00:35:56.308 killing process with pid 2630252 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2630252 00:35:56.308 23:40:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2630252 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:57.679 rmmod nvme_tcp 00:35:57.679 rmmod nvme_fabrics 00:35:57.679 rmmod nvme_keyring 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2630202 ']' 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2630202 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2630202 ']' 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2630202 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2630202 
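The killprocess calls traced here (for the host app, and continuing just below for the target) follow the same defensive pattern: confirm the pid is alive, read its command name so a sudo process is never killed by accident, then kill and reap it. A rough sketch of the helper, inferred from the trace rather than copied from the source:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1               # no pid given
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        # Never kill sudo itself; the tests run many commands under it.
        [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }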
00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2630202'
00:35:57.679 killing process with pid 2630202
00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2630202
00:35:57.679 23:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2630202
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:59.051 23:40:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:00.952 23:40:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:00.952
00:36:00.952 real 0m23.231s
00:36:00.952 user 0m29.953s
00:36:00.952 sys 0m5.402s
00:36:00.952 23:40:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:00.952 23:40:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:36:00.952 ************************************
00:36:00.952 END TEST nvmf_discovery_remove_ifc
00:36:00.952 ************************************
00:36:00.952 23:40:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:36:00.952 23:40:09 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:36:00.952 23:40:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:36:00.952 23:40:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:00.952 23:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:00.952 ************************************
00:36:00.952 START TEST nvmf_identify_kernel_target
00:36:00.952 ************************************
00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:36:00.952 * Looking for test storage...
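The real/user/sys summary and the START/END banners above come from the harness's run_test wrapper, which times each test script and brackets its output. Its rough shape, inferred from the banners and the argument check ('[' 3 -le 1 ']') in the trace rather than from the source:

    run_test() {
        local name=$1; shift
        [[ $# -ge 1 ]] || return 1   # needs at least a command to run
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }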
00:36:00.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:36:00.952 23:40:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.220 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.221 
23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:06.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:06.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
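The enumeration above walks the PCI bus for E810 functions (vendor 0x8086, device 0x159b per the trace) and then lists each function's net devices out of sysfs. Outside the harness the same walk can be approximated as follows (the lspci flags and loop structure are assumptions, not the harness's code):

    # Find every 8086:159b function, then report its kernel net devices.
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e "$netdir" ]] || continue   # function may have no netdev bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done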
00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:06.221 Found net devices under 0000:86:00.0: cvl_0_0 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:06.221 Found net devices under 0000:86:00.1: cvl_0_1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:06.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:36:06.221 00:36:06.221 --- 10.0.0.2 ping statistics --- 00:36:06.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.221 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:06.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:36:06.221 00:36:06.221 --- 10.0.0.1 ping statistics --- 00:36:06.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.221 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.221 23:40:14 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:06.221 23:40:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:08.125 Waiting for block devices as requested 00:36:08.383 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:08.383 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:08.383 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:08.641 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:08.641 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:08.641 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:08.641 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:08.900 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:08.900 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:08.900 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:08.900 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:09.159 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:09.159 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:09.159 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:09.160 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:09.418 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:09.418 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:09.418 
23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:09.418 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:09.677 No valid GPT data, bailing 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:09.677 00:36:09.677 Discovery Log Number of Records 2, Generation counter 2 00:36:09.677 =====Discovery Log Entry 0====== 00:36:09.677 trtype: tcp 00:36:09.677 adrfam: ipv4 00:36:09.677 subtype: current discovery subsystem 00:36:09.677 treq: not specified, sq flow control disable supported 00:36:09.677 portid: 1 00:36:09.677 trsvcid: 4420 00:36:09.677 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:09.677 traddr: 10.0.0.1 00:36:09.677 eflags: none 00:36:09.677 sectype: none 00:36:09.677 =====Discovery Log Entry 1====== 00:36:09.677 trtype: tcp 00:36:09.677 adrfam: ipv4 00:36:09.677 subtype: nvme subsystem 00:36:09.677 treq: not 
specified, sq flow control disable supported 00:36:09.677 portid: 1 00:36:09.677 trsvcid: 4420 00:36:09.677 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:09.677 traddr: 10.0.0.1 00:36:09.677 eflags: none 00:36:09.677 sectype: none 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:09.677 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:09.677 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.677 ===================================================== 00:36:09.677 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:09.677 ===================================================== 00:36:09.677 Controller Capabilities/Features 00:36:09.677 ================================ 00:36:09.677 Vendor ID: 0000 00:36:09.677 Subsystem Vendor ID: 0000 00:36:09.677 Serial Number: ea18e2f072b0a0a7e33a 00:36:09.677 Model Number: Linux 00:36:09.677 Firmware Version: 6.7.0-68 00:36:09.677 Recommended Arb Burst: 0 00:36:09.677 IEEE OUI Identifier: 00 00 00 00:36:09.677 Multi-path I/O 00:36:09.677 May have multiple subsystem ports: No 00:36:09.677 May have multiple controllers: No 00:36:09.677 Associated with SR-IOV VF: No 00:36:09.677 Max Data Transfer Size: Unlimited 00:36:09.677 Max Number of Namespaces: 0 00:36:09.677 Max Number of I/O Queues: 1024 00:36:09.677 NVMe Specification Version (VS): 1.3 00:36:09.677 NVMe Specification Version (Identify): 1.3 00:36:09.677 Maximum Queue Entries: 1024 00:36:09.677 Contiguous Queues Required: No 00:36:09.677 Arbitration Mechanisms Supported 00:36:09.677 Weighted Round Robin: Not Supported 00:36:09.677 Vendor Specific: Not Supported 00:36:09.677 Reset Timeout: 7500 ms 00:36:09.677 Doorbell Stride: 4 bytes 00:36:09.677 NVM Subsystem Reset: Not Supported 00:36:09.677 Command Sets Supported 00:36:09.677 NVM Command Set: Supported 00:36:09.677 Boot Partition: Not Supported 00:36:09.677 Memory Page Size Minimum: 4096 bytes 00:36:09.677 Memory Page Size Maximum: 4096 bytes 00:36:09.677 Persistent Memory Region: Not Supported 00:36:09.677 Optional Asynchronous Events Supported 00:36:09.677 Namespace Attribute Notices: Not Supported 00:36:09.677 Firmware Activation Notices: Not Supported 00:36:09.677 ANA Change Notices: Not Supported 00:36:09.677 PLE Aggregate Log Change Notices: Not Supported 00:36:09.677 LBA Status Info Alert Notices: Not Supported 00:36:09.677 EGE Aggregate Log Change Notices: Not Supported 00:36:09.677 Normal NVM Subsystem Shutdown event: Not Supported 00:36:09.677 Zone Descriptor Change Notices: Not Supported 00:36:09.677 Discovery Log Change Notices: Supported 00:36:09.677 Controller Attributes 00:36:09.677 128-bit Host Identifier: Not Supported 00:36:09.677 Non-Operational Permissive Mode: Not Supported 00:36:09.677 NVM Sets: Not Supported 00:36:09.677 Read Recovery Levels: Not Supported 00:36:09.677 Endurance Groups: Not Supported 00:36:09.677 Predictable Latency Mode: Not Supported 00:36:09.677 Traffic Based Keep ALive: Not Supported 00:36:09.677 Namespace Granularity: Not Supported 00:36:09.677 SQ Associations: Not Supported 00:36:09.677 UUID List: Not Supported 00:36:09.677 Multi-Domain Subsystem: Not Supported 00:36:09.677 Fixed Capacity Management: Not Supported 00:36:09.677 Variable Capacity Management: Not Supported 00:36:09.677 Delete Endurance Group: Not Supported 00:36:09.677 Delete NVM Set: Not Supported 00:36:09.677 
Extended LBA Formats Supported: Not Supported 00:36:09.677 Flexible Data Placement Supported: Not Supported 00:36:09.677 00:36:09.677 Controller Memory Buffer Support 00:36:09.677 ================================ 00:36:09.677 Supported: No 00:36:09.677 00:36:09.677 Persistent Memory Region Support 00:36:09.677 ================================ 00:36:09.677 Supported: No 00:36:09.677 00:36:09.677 Admin Command Set Attributes 00:36:09.677 ============================ 00:36:09.677 Security Send/Receive: Not Supported 00:36:09.677 Format NVM: Not Supported 00:36:09.677 Firmware Activate/Download: Not Supported 00:36:09.677 Namespace Management: Not Supported 00:36:09.677 Device Self-Test: Not Supported 00:36:09.677 Directives: Not Supported 00:36:09.677 NVMe-MI: Not Supported 00:36:09.677 Virtualization Management: Not Supported 00:36:09.677 Doorbell Buffer Config: Not Supported 00:36:09.677 Get LBA Status Capability: Not Supported 00:36:09.677 Command & Feature Lockdown Capability: Not Supported 00:36:09.677 Abort Command Limit: 1 00:36:09.677 Async Event Request Limit: 1 00:36:09.677 Number of Firmware Slots: N/A 00:36:09.677 Firmware Slot 1 Read-Only: N/A 00:36:09.677 Firmware Activation Without Reset: N/A 00:36:09.677 Multiple Update Detection Support: N/A 00:36:09.677 Firmware Update Granularity: No Information Provided 00:36:09.677 Per-Namespace SMART Log: No 00:36:09.677 Asymmetric Namespace Access Log Page: Not Supported 00:36:09.677 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:09.677 Command Effects Log Page: Not Supported 00:36:09.677 Get Log Page Extended Data: Supported 00:36:09.677 Telemetry Log Pages: Not Supported 00:36:09.677 Persistent Event Log Pages: Not Supported 00:36:09.677 Supported Log Pages Log Page: May Support 00:36:09.677 Commands Supported & Effects Log Page: Not Supported 00:36:09.677 Feature Identifiers & Effects Log Page:May Support 00:36:09.677 NVMe-MI Commands & Effects Log Page: May Support 00:36:09.677 Data Area 4 for Telemetry Log: Not Supported 00:36:09.677 Error Log Page Entries Supported: 1 00:36:09.677 Keep Alive: Not Supported 00:36:09.677 00:36:09.677 NVM Command Set Attributes 00:36:09.677 ========================== 00:36:09.677 Submission Queue Entry Size 00:36:09.677 Max: 1 00:36:09.677 Min: 1 00:36:09.677 Completion Queue Entry Size 00:36:09.677 Max: 1 00:36:09.677 Min: 1 00:36:09.677 Number of Namespaces: 0 00:36:09.677 Compare Command: Not Supported 00:36:09.677 Write Uncorrectable Command: Not Supported 00:36:09.677 Dataset Management Command: Not Supported 00:36:09.677 Write Zeroes Command: Not Supported 00:36:09.677 Set Features Save Field: Not Supported 00:36:09.677 Reservations: Not Supported 00:36:09.677 Timestamp: Not Supported 00:36:09.677 Copy: Not Supported 00:36:09.677 Volatile Write Cache: Not Present 00:36:09.677 Atomic Write Unit (Normal): 1 00:36:09.677 Atomic Write Unit (PFail): 1 00:36:09.677 Atomic Compare & Write Unit: 1 00:36:09.677 Fused Compare & Write: Not Supported 00:36:09.677 Scatter-Gather List 00:36:09.677 SGL Command Set: Supported 00:36:09.677 SGL Keyed: Not Supported 00:36:09.677 SGL Bit Bucket Descriptor: Not Supported 00:36:09.677 SGL Metadata Pointer: Not Supported 00:36:09.677 Oversized SGL: Not Supported 00:36:09.677 SGL Metadata Address: Not Supported 00:36:09.677 SGL Offset: Supported 00:36:09.677 Transport SGL Data Block: Not Supported 00:36:09.677 Replay Protected Memory Block: Not Supported 00:36:09.677 00:36:09.677 Firmware Slot Information 00:36:09.677 ========================= 00:36:09.677 
Active slot: 0 00:36:09.677 00:36:09.677 00:36:09.677 Error Log 00:36:09.677 ========= 00:36:09.677 00:36:09.677 Active Namespaces 00:36:09.677 ================= 00:36:09.677 Discovery Log Page 00:36:09.677 ================== 00:36:09.677 Generation Counter: 2 00:36:09.677 Number of Records: 2 00:36:09.677 Record Format: 0 00:36:09.677 00:36:09.677 Discovery Log Entry 0 00:36:09.677 ---------------------- 00:36:09.677 Transport Type: 3 (TCP) 00:36:09.677 Address Family: 1 (IPv4) 00:36:09.677 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:09.677 Entry Flags: 00:36:09.677 Duplicate Returned Information: 0 00:36:09.677 Explicit Persistent Connection Support for Discovery: 0 00:36:09.677 Transport Requirements: 00:36:09.677 Secure Channel: Not Specified 00:36:09.677 Port ID: 1 (0x0001) 00:36:09.677 Controller ID: 65535 (0xffff) 00:36:09.677 Admin Max SQ Size: 32 00:36:09.677 Transport Service Identifier: 4420 00:36:09.677 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:09.677 Transport Address: 10.0.0.1 00:36:09.677 Discovery Log Entry 1 00:36:09.677 ---------------------- 00:36:09.677 Transport Type: 3 (TCP) 00:36:09.677 Address Family: 1 (IPv4) 00:36:09.677 Subsystem Type: 2 (NVM Subsystem) 00:36:09.677 Entry Flags: 00:36:09.677 Duplicate Returned Information: 0 00:36:09.677 Explicit Persistent Connection Support for Discovery: 0 00:36:09.677 Transport Requirements: 00:36:09.677 Secure Channel: Not Specified 00:36:09.677 Port ID: 1 (0x0001) 00:36:09.677 Controller ID: 65535 (0xffff) 00:36:09.677 Admin Max SQ Size: 32 00:36:09.677 Transport Service Identifier: 4420 00:36:09.677 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:09.677 Transport Address: 10.0.0.1 00:36:09.677 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.937 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.937 get_feature(0x01) failed 00:36:09.937 get_feature(0x02) failed 00:36:09.937 get_feature(0x04) failed 00:36:09.937 ===================================================== 00:36:09.937 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:09.937 ===================================================== 00:36:09.937 Controller Capabilities/Features 00:36:09.937 ================================ 00:36:09.937 Vendor ID: 0000 00:36:09.937 Subsystem Vendor ID: 0000 00:36:09.937 Serial Number: 7a0d8097ccaeeb706f24 00:36:09.937 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:09.937 Firmware Version: 6.7.0-68 00:36:09.937 Recommended Arb Burst: 6 00:36:09.937 IEEE OUI Identifier: 00 00 00 00:36:09.937 Multi-path I/O 00:36:09.937 May have multiple subsystem ports: Yes 00:36:09.937 May have multiple controllers: Yes 00:36:09.937 Associated with SR-IOV VF: No 00:36:09.937 Max Data Transfer Size: Unlimited 00:36:09.937 Max Number of Namespaces: 1024 00:36:09.937 Max Number of I/O Queues: 128 00:36:09.937 NVMe Specification Version (VS): 1.3 00:36:09.937 NVMe Specification Version (Identify): 1.3 00:36:09.937 Maximum Queue Entries: 1024 00:36:09.937 Contiguous Queues Required: No 00:36:09.937 Arbitration Mechanisms Supported 00:36:09.937 Weighted Round Robin: Not Supported 00:36:09.937 Vendor Specific: Not Supported 00:36:09.937 Reset Timeout: 7500 ms 00:36:09.937 Doorbell Stride: 4 bytes 00:36:09.937 NVM Subsystem Reset: Not Supported 
00:36:09.937 Command Sets Supported 00:36:09.937 NVM Command Set: Supported 00:36:09.937 Boot Partition: Not Supported 00:36:09.937 Memory Page Size Minimum: 4096 bytes 00:36:09.937 Memory Page Size Maximum: 4096 bytes 00:36:09.937 Persistent Memory Region: Not Supported 00:36:09.937 Optional Asynchronous Events Supported 00:36:09.937 Namespace Attribute Notices: Supported 00:36:09.937 Firmware Activation Notices: Not Supported 00:36:09.937 ANA Change Notices: Supported 00:36:09.937 PLE Aggregate Log Change Notices: Not Supported 00:36:09.937 LBA Status Info Alert Notices: Not Supported 00:36:09.937 EGE Aggregate Log Change Notices: Not Supported 00:36:09.937 Normal NVM Subsystem Shutdown event: Not Supported 00:36:09.937 Zone Descriptor Change Notices: Not Supported 00:36:09.937 Discovery Log Change Notices: Not Supported 00:36:09.937 Controller Attributes 00:36:09.937 128-bit Host Identifier: Supported 00:36:09.937 Non-Operational Permissive Mode: Not Supported 00:36:09.937 NVM Sets: Not Supported 00:36:09.937 Read Recovery Levels: Not Supported 00:36:09.937 Endurance Groups: Not Supported 00:36:09.937 Predictable Latency Mode: Not Supported 00:36:09.937 Traffic Based Keep ALive: Supported 00:36:09.937 Namespace Granularity: Not Supported 00:36:09.937 SQ Associations: Not Supported 00:36:09.937 UUID List: Not Supported 00:36:09.937 Multi-Domain Subsystem: Not Supported 00:36:09.937 Fixed Capacity Management: Not Supported 00:36:09.937 Variable Capacity Management: Not Supported 00:36:09.937 Delete Endurance Group: Not Supported 00:36:09.937 Delete NVM Set: Not Supported 00:36:09.937 Extended LBA Formats Supported: Not Supported 00:36:09.937 Flexible Data Placement Supported: Not Supported 00:36:09.937 00:36:09.937 Controller Memory Buffer Support 00:36:09.937 ================================ 00:36:09.937 Supported: No 00:36:09.937 00:36:09.937 Persistent Memory Region Support 00:36:09.937 ================================ 00:36:09.937 Supported: No 00:36:09.937 00:36:09.937 Admin Command Set Attributes 00:36:09.937 ============================ 00:36:09.937 Security Send/Receive: Not Supported 00:36:09.937 Format NVM: Not Supported 00:36:09.937 Firmware Activate/Download: Not Supported 00:36:09.937 Namespace Management: Not Supported 00:36:09.937 Device Self-Test: Not Supported 00:36:09.937 Directives: Not Supported 00:36:09.937 NVMe-MI: Not Supported 00:36:09.937 Virtualization Management: Not Supported 00:36:09.937 Doorbell Buffer Config: Not Supported 00:36:09.937 Get LBA Status Capability: Not Supported 00:36:09.937 Command & Feature Lockdown Capability: Not Supported 00:36:09.937 Abort Command Limit: 4 00:36:09.937 Async Event Request Limit: 4 00:36:09.937 Number of Firmware Slots: N/A 00:36:09.937 Firmware Slot 1 Read-Only: N/A 00:36:09.937 Firmware Activation Without Reset: N/A 00:36:09.937 Multiple Update Detection Support: N/A 00:36:09.937 Firmware Update Granularity: No Information Provided 00:36:09.937 Per-Namespace SMART Log: Yes 00:36:09.937 Asymmetric Namespace Access Log Page: Supported 00:36:09.937 ANA Transition Time : 10 sec 00:36:09.937 00:36:09.937 Asymmetric Namespace Access Capabilities 00:36:09.937 ANA Optimized State : Supported 00:36:09.937 ANA Non-Optimized State : Supported 00:36:09.937 ANA Inaccessible State : Supported 00:36:09.937 ANA Persistent Loss State : Supported 00:36:09.937 ANA Change State : Supported 00:36:09.937 ANAGRPID is not changed : No 00:36:09.937 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:09.937 00:36:09.937 ANA Group Identifier 
Maximum : 128 00:36:09.937 Number of ANA Group Identifiers : 128 00:36:09.937 Max Number of Allowed Namespaces : 1024 00:36:09.937 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:09.937 Command Effects Log Page: Supported 00:36:09.937 Get Log Page Extended Data: Supported 00:36:09.937 Telemetry Log Pages: Not Supported 00:36:09.937 Persistent Event Log Pages: Not Supported 00:36:09.937 Supported Log Pages Log Page: May Support 00:36:09.937 Commands Supported & Effects Log Page: Not Supported 00:36:09.937 Feature Identifiers & Effects Log Page:May Support 00:36:09.937 NVMe-MI Commands & Effects Log Page: May Support 00:36:09.937 Data Area 4 for Telemetry Log: Not Supported 00:36:09.937 Error Log Page Entries Supported: 128 00:36:09.937 Keep Alive: Supported 00:36:09.937 Keep Alive Granularity: 1000 ms 00:36:09.937 00:36:09.937 NVM Command Set Attributes 00:36:09.937 ========================== 00:36:09.937 Submission Queue Entry Size 00:36:09.937 Max: 64 00:36:09.937 Min: 64 00:36:09.937 Completion Queue Entry Size 00:36:09.937 Max: 16 00:36:09.937 Min: 16 00:36:09.937 Number of Namespaces: 1024 00:36:09.937 Compare Command: Not Supported 00:36:09.937 Write Uncorrectable Command: Not Supported 00:36:09.937 Dataset Management Command: Supported 00:36:09.937 Write Zeroes Command: Supported 00:36:09.937 Set Features Save Field: Not Supported 00:36:09.937 Reservations: Not Supported 00:36:09.937 Timestamp: Not Supported 00:36:09.937 Copy: Not Supported 00:36:09.937 Volatile Write Cache: Present 00:36:09.937 Atomic Write Unit (Normal): 1 00:36:09.937 Atomic Write Unit (PFail): 1 00:36:09.937 Atomic Compare & Write Unit: 1 00:36:09.937 Fused Compare & Write: Not Supported 00:36:09.937 Scatter-Gather List 00:36:09.937 SGL Command Set: Supported 00:36:09.937 SGL Keyed: Not Supported 00:36:09.937 SGL Bit Bucket Descriptor: Not Supported 00:36:09.937 SGL Metadata Pointer: Not Supported 00:36:09.937 Oversized SGL: Not Supported 00:36:09.937 SGL Metadata Address: Not Supported 00:36:09.937 SGL Offset: Supported 00:36:09.937 Transport SGL Data Block: Not Supported 00:36:09.937 Replay Protected Memory Block: Not Supported 00:36:09.937 00:36:09.937 Firmware Slot Information 00:36:09.937 ========================= 00:36:09.937 Active slot: 0 00:36:09.937 00:36:09.937 Asymmetric Namespace Access 00:36:09.937 =========================== 00:36:09.937 Change Count : 0 00:36:09.937 Number of ANA Group Descriptors : 1 00:36:09.937 ANA Group Descriptor : 0 00:36:09.937 ANA Group ID : 1 00:36:09.937 Number of NSID Values : 1 00:36:09.937 Change Count : 0 00:36:09.937 ANA State : 1 00:36:09.937 Namespace Identifier : 1 00:36:09.937 00:36:09.937 Commands Supported and Effects 00:36:09.937 ============================== 00:36:09.937 Admin Commands 00:36:09.937 -------------- 00:36:09.937 Get Log Page (02h): Supported 00:36:09.937 Identify (06h): Supported 00:36:09.937 Abort (08h): Supported 00:36:09.937 Set Features (09h): Supported 00:36:09.937 Get Features (0Ah): Supported 00:36:09.937 Asynchronous Event Request (0Ch): Supported 00:36:09.937 Keep Alive (18h): Supported 00:36:09.937 I/O Commands 00:36:09.937 ------------ 00:36:09.937 Flush (00h): Supported 00:36:09.937 Write (01h): Supported LBA-Change 00:36:09.938 Read (02h): Supported 00:36:09.938 Write Zeroes (08h): Supported LBA-Change 00:36:09.938 Dataset Management (09h): Supported 00:36:09.938 00:36:09.938 Error Log 00:36:09.938 ========= 00:36:09.938 Entry: 0 00:36:09.938 Error Count: 0x3 00:36:09.938 Submission Queue Id: 0x0 00:36:09.938 Command Id: 0x5 
00:36:09.938 Phase Bit: 0 00:36:09.938 Status Code: 0x2 00:36:09.938 Status Code Type: 0x0 00:36:09.938 Do Not Retry: 1 00:36:09.938 Error Location: 0x28 00:36:09.938 LBA: 0x0 00:36:09.938 Namespace: 0x0 00:36:09.938 Vendor Log Page: 0x0 00:36:09.938 ----------- 00:36:09.938 Entry: 1 00:36:09.938 Error Count: 0x2 00:36:09.938 Submission Queue Id: 0x0 00:36:09.938 Command Id: 0x5 00:36:09.938 Phase Bit: 0 00:36:09.938 Status Code: 0x2 00:36:09.938 Status Code Type: 0x0 00:36:09.938 Do Not Retry: 1 00:36:09.938 Error Location: 0x28 00:36:09.938 LBA: 0x0 00:36:09.938 Namespace: 0x0 00:36:09.938 Vendor Log Page: 0x0 00:36:09.938 ----------- 00:36:09.938 Entry: 2 00:36:09.938 Error Count: 0x1 00:36:09.938 Submission Queue Id: 0x0 00:36:09.938 Command Id: 0x4 00:36:09.938 Phase Bit: 0 00:36:09.938 Status Code: 0x2 00:36:09.938 Status Code Type: 0x0 00:36:09.938 Do Not Retry: 1 00:36:09.938 Error Location: 0x28 00:36:09.938 LBA: 0x0 00:36:09.938 Namespace: 0x0 00:36:09.938 Vendor Log Page: 0x0 00:36:09.938 00:36:09.938 Number of Queues 00:36:09.938 ================ 00:36:09.938 Number of I/O Submission Queues: 128 00:36:09.938 Number of I/O Completion Queues: 128 00:36:09.938 00:36:09.938 ZNS Specific Controller Data 00:36:09.938 ============================ 00:36:09.938 Zone Append Size Limit: 0 00:36:09.938 00:36:09.938 00:36:09.938 Active Namespaces 00:36:09.938 ================= 00:36:09.938 get_feature(0x05) failed 00:36:09.938 Namespace ID:1 00:36:09.938 Command Set Identifier: NVM (00h) 00:36:09.938 Deallocate: Supported 00:36:09.938 Deallocated/Unwritten Error: Not Supported 00:36:09.938 Deallocated Read Value: Unknown 00:36:09.938 Deallocate in Write Zeroes: Not Supported 00:36:09.938 Deallocated Guard Field: 0xFFFF 00:36:09.938 Flush: Supported 00:36:09.938 Reservation: Not Supported 00:36:09.938 Namespace Sharing Capabilities: Multiple Controllers 00:36:09.938 Size (in LBAs): 1953525168 (931GiB) 00:36:09.938 Capacity (in LBAs): 1953525168 (931GiB) 00:36:09.938 Utilization (in LBAs): 1953525168 (931GiB) 00:36:09.938 UUID: d56fd931-0736-469e-8488-088d21f34ce1 00:36:09.938 Thin Provisioning: Not Supported 00:36:09.938 Per-NS Atomic Units: Yes 00:36:09.938 Atomic Boundary Size (Normal): 0 00:36:09.938 Atomic Boundary Size (PFail): 0 00:36:09.938 Atomic Boundary Offset: 0 00:36:09.938 NGUID/EUI64 Never Reused: No 00:36:09.938 ANA group ID: 1 00:36:09.938 Namespace Write Protected: No 00:36:09.938 Number of LBA Formats: 1 00:36:09.938 Current LBA Format: LBA Format #00 00:36:09.938 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:09.938 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:09.938 rmmod nvme_tcp 00:36:09.938 rmmod nvme_fabrics 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- 
# set -e 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:09.938 23:40:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:12.552 23:40:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:12.552 23:40:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.453 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:14.453 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:36:15.385 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:15.385 00:36:15.385 real 0m14.487s 00:36:15.385 user 0m3.400s 00:36:15.385 sys 0m7.333s 00:36:15.385 23:40:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:15.385 23:40:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:15.385 ************************************ 00:36:15.385 END TEST nvmf_identify_kernel_target 00:36:15.385 ************************************ 00:36:15.385 23:40:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:15.385 23:40:24 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:15.385 23:40:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:15.385 23:40:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:15.385 23:40:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.385 ************************************ 00:36:15.385 START TEST nvmf_auth_host 00:36:15.385 ************************************ 00:36:15.385 23:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:15.642 * Looking for test storage... 00:36:15.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:15.642 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.642 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:15.642 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:15.643 23:40:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:36:15.643 23:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local 
-ga mlx 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:20.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:20.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:20.913 23:40:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:20.913 Found net devices under 0000:86:00.0: cvl_0_0 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:20.913 Found net devices under 0000:86:00.1: cvl_0_1 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
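[editor note] The trace above shows how the harness discovers its NICs: for each supported PCI function (the two e810 ports at 0000:86:00.0 and 0000:86:00.1) it expands /sys/bus/pci/devices/$pci/net/* and collects the kernel interface names it finds there (cvl_0_0, cvl_0_1). A minimal stand-alone sketch of that sysfs walk, with the BDFs hard-coded to the ones seen in this log (assumption: the functions are still bound to a netdev driver rather than vfio-pci, otherwise the net/ directory is absent):

  #!/usr/bin/env bash
  # Sketch only: enumerate net devices behind known PCI functions via sysfs,
  # the same /sys/bus/pci/devices/$pci/net/* expansion the trace performs.
  set -euo pipefail

  pci_devs=("0000:86:00.0" "0000:86:00.1")   # e810 BDFs seen in this log

  for pci in "${pci_devs[@]}"; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e $netdir ]] || continue           # bound to vfio-pci -> no netdev
      dev=${netdir##*/}
      echo "Found net devices under $pci: $dev"
    done
  done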
00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:20.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:20.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:36:20.913 00:36:20.913 --- 10.0.0.2 ping statistics --- 00:36:20.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.913 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:20.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:20.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:36:20.913 00:36:20.913 --- 10.0.0.1 ping statistics --- 00:36:20.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.913 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:20.913 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2642213 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2642213 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 
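[editor note] At this point the physical topology for the TCP tests is in place: the target port cvl_0_0 has been moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms reachability before nvmf_tgt is started inside the namespace. A condensed replay of exactly those traced commands (run as root; the interface names and addresses are the ones this log uses, not universal defaults):

  set -e
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> initiator

Everything target-side from here on is therefore wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt invocation above carries that prefix.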
00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2642213 ']' 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:20.914 23:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c34255c884b3fe88edb089d23743852 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fEq 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c34255c884b3fe88edb089d23743852 0 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c34255c884b3fe88edb089d23743852 0 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c34255c884b3fe88edb089d23743852 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fEq 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fEq 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fEq 00:36:21.483 23:40:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fff111db7e9e53b166f264c1e11c7113011bbd728fad2d5da77939fe496cba01 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Bjd 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fff111db7e9e53b166f264c1e11c7113011bbd728fad2d5da77939fe496cba01 3 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fff111db7e9e53b166f264c1e11c7113011bbd728fad2d5da77939fe496cba01 3 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fff111db7e9e53b166f264c1e11c7113011bbd728fad2d5da77939fe496cba01 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:21.483 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Bjd 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Bjd 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Bjd 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=09ec98ce793156a555f715cb8bbbbbb72919429c9654deb2 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wOQ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 09ec98ce793156a555f715cb8bbbbbb72919429c9654deb2 0 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 09ec98ce793156a555f715cb8bbbbbb72919429c9654deb2 0 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 
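[editor note] Here the suite pre-generates the DH-HMAC-CHAP key material it will feed to the target: for each key it draws len/2 random bytes from /dev/urandom as a hex string with xxd, runs it through a small inline "python -" helper together with the DHHC-1 prefix and a digest selector (0=null, 1=sha256, 2=sha384, 3=sha512), and stores the result in a mode-0600 temp file whose path is recorded in keys[]/ckeys[]. A hedged sketch of that flow; the inline Python encoder is not visible in the log, so format_stub below is a hypothetical placeholder, not SPDK's real formatter from nvmf/common.sh:

  #!/usr/bin/env bash
  set -euo pipefail

  # Hypothetical stand-in for the inline `python -` encoder the trace runs;
  # the actual DHHC-1 encoding is done by format_dhchap_key, not shown here.
  format_stub() {
    local prefix=$1 key=$2 digest=$3
    printf '%s:%s:%s\n' "$prefix" "$digest" "$key"
  }

  gen_key_sketch() {               # mirrors gen_dhchap_key <digest> <len>
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_stub DHHC-1 "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                               # keys stay private
    echo "$file"
  }

  gen_key_sketch null 32           # prints the generated key-file path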
00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=09ec98ce793156a555f715cb8bbbbbb72919429c9654deb2 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wOQ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wOQ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wOQ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f71fb46eb3eae016dc1bc106c085ced05f343c905785549 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0EJ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f71fb46eb3eae016dc1bc106c085ced05f343c905785549 2 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f71fb46eb3eae016dc1bc106c085ced05f343c905785549 2 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f71fb46eb3eae016dc1bc106c085ced05f343c905785549 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0EJ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0EJ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0EJ 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=b76418eecab8b76539a1a09606de3bb4 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SUq 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b76418eecab8b76539a1a09606de3bb4 1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b76418eecab8b76539a1a09606de3bb4 1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b76418eecab8b76539a1a09606de3bb4 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SUq 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SUq 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SUq 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce4713c3c9b2fc5192c4a2d7a3f75a95 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.415 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce4713c3c9b2fc5192c4a2d7a3f75a95 1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce4713c3c9b2fc5192c4a2d7a3f75a95 1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce4713c3c9b2fc5192c4a2d7a3f75a95 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:21.743 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.415 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.415 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.415 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=241b3b0f679ab09184378da4062f15909c5d960b948bf63b 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IZS 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 241b3b0f679ab09184378da4062f15909c5d960b948bf63b 2 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 241b3b0f679ab09184378da4062f15909c5d960b948bf63b 2 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=241b3b0f679ab09184378da4062f15909c5d960b948bf63b 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IZS 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IZS 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IZS 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cebe0cd75c05d7b2e56bfe89b7dc67af 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fGi 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cebe0cd75c05d7b2e56bfe89b7dc67af 0 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cebe0cd75c05d7b2e56bfe89b7dc67af 0 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cebe0cd75c05d7b2e56bfe89b7dc67af 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fGi 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fGi 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fGi 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:22.003 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5b04aaaa0bf99d6d06c8807c1db87cebc6a71b7ddb3586fbf8ab7019dd26f78 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5we 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5b04aaaa0bf99d6d06c8807c1db87cebc6a71b7ddb3586fbf8ab7019dd26f78 3 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5b04aaaa0bf99d6d06c8807c1db87cebc6a71b7ddb3586fbf8ab7019dd26f78 3 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5b04aaaa0bf99d6d06c8807c1db87cebc6a71b7ddb3586fbf8ab7019dd26f78 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5we 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5we 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5we 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2642213 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2642213 ']' 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
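Note on the key material above: every gen_dhchap_key call follows one pattern. It reads len/2 bytes from /dev/urandom with xxd -p to obtain a len-character hex secret, writes it to a mktemp file, chmods it 0600, and wraps it in the DHHC-1 on-wire format through the inline python helper. The helper's body is not visible in the xtrace; the sketch below assumes it base64-encodes the secret with a trailing little-endian CRC-32, which is consistent with the DHHC-1 strings that appear later in this log:

    # Condensed sketch of the gen_dhchap_key/format_key pattern. The python
    # one-liner is an assumption inferred from the DHHC-1 secrets visible in
    # this log, not quoted from nvmf/common.sh.
    digest=sha384 len=48 id=2    # ids per the trace: null=0 sha256=1 sha384=2 sha512=3
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 bytes -> len hex chars
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$hex" "$id" > "$file"
    chmod 0600 "$file" && echo "$file"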
00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:22.004 23:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fEq 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Bjd ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Bjd 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wOQ 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0EJ ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0EJ 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SUq 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.415 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.415 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
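The generated files are then handed to the running SPDK application: rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock (the socket waitforlisten just checked), and keyring_file_add_key registers each file under the names key0..key4 and ckey0..ckey3 that the attach calls use later; the key3/ckey3 and key4 registrations continue directly below. Stripped of xtrace noise, host/auth.sh@80-82 amounts to:

    # Equivalent direct invocations; key names and socket path as in the trace.
    for i in "${!keys[@]}"; do
        scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[$i]}"
        [[ -n ${ckeys[$i]} ]] && \
            scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done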
00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IZS 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fGi ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fGi 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5we 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
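get_main_ns_ip (nvmf/common.sh@741-755 above) only maps the transport to an environment variable name and dereferences it; for tcp that is NVMF_INITIATOR_IP, hence the echoed 10.0.0.1 that configure_kernel_target receives. Reduced to its effect for this run:

    # What the ip_candidates logic above boils down to on a tcp run.
    NVMF_INITIATOR_IP=10.0.0.1
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    varname=${ip_candidates[tcp]}   # variable *name* selected for the transport
    echo "${!varname}"              # indirect expansion -> 10.0.0.1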
00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:22.264 23:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:24.800 Waiting for block devices as requested 00:36:24.800 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:25.058 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:25.058 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:25.317 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:25.317 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:25.317 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:25.317 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:25.576 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:25.576 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:25.576 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:25.576 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:25.834 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:25.834 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:25.834 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:26.093 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:26.093 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:26.093 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:26.659 No valid GPT data, bailing 00:36:26.659 23:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- 
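configure_kernel_target is the target-side bring-up: setup.sh reset hands the NVMe drive and ioatdma channels back to kernel drivers (the vfio-pci rebinds above), the spdk-gpt.py probe confirms the disk is free to claim ("No valid GPT data, bailing" is the passing case: no partition table to protect), and the mkdir calls begin the standard nvmet configfs recipe. The echo entries that follow show only values, not destinations, so the attribute file names in this sketch are the stock nvmet ones, assumed rather than quoted from the script:

    # Kernel NVMe-oF/TCP target via configfs; paths and values as in the trace,
    # attribute file names assumed from the standard nvmet layout.
    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo 1 > "$sub/attr_allow_any_host"        # auth.sh@37 below likely flips this to 0
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"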
nvmf/common.sh@667 -- # echo 1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:26.919 00:36:26.919 Discovery Log Number of Records 2, Generation counter 2 00:36:26.919 =====Discovery Log Entry 0====== 00:36:26.919 trtype: tcp 00:36:26.919 adrfam: ipv4 00:36:26.919 subtype: current discovery subsystem 00:36:26.919 treq: not specified, sq flow control disable supported 00:36:26.919 portid: 1 00:36:26.919 trsvcid: 4420 00:36:26.919 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:26.919 traddr: 10.0.0.1 00:36:26.919 eflags: none 00:36:26.919 sectype: none 00:36:26.919 =====Discovery Log Entry 1====== 00:36:26.919 trtype: tcp 00:36:26.919 adrfam: ipv4 00:36:26.919 subtype: nvme subsystem 00:36:26.919 treq: not specified, sq flow control disable supported 00:36:26.919 portid: 1 00:36:26.919 trsvcid: 4420 00:36:26.919 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:26.919 traddr: 10.0.0.1 00:36:26.919 eflags: none 00:36:26.919 sectype: none 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 
]] 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.919 23:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.178 nvme0n1 00:36:27.178 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.178 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.178 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.178 
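The discovery listing above confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420, and the trace then settles into the test's core pair of steps. nvmet_auth_set_key programs the kernel side, echoing the HMAC name, DH group, and DHHC-1 secrets into the allowed host's configfs entry (only the values are visible in the trace; the attribute names below are the standard nvmet ones, an assumption). connect_authenticate drives the SPDK initiator with the two rpc_cmd calls whose flags appear verbatim above:

    # Kernel side of one iteration (sha256 / ffdhe2048 / keyid 1); attribute
    # names assumed, values as echoed at host/auth.sh@48-51.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:MDll...HQ==:' > "$host/dhchap_key"       # keys[1], abbreviated here
    echo 'DHHC-1:02:N2Y3...gw==:' > "$host/dhchap_ctrl_key"  # ckeys[1] -> bidirectional auth

    # Initiator side: restrict what may be negotiated, then attach by key name.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1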
23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.178 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.179 
23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.179 nvme0n1 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.179 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.438 23:40:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.438 nvme0n1 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
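Each successful handshake ends the same way: the new controller must show up in bdev_nvme_get_controllers as nvme0, and it is then detached so the next combination starts clean. The remainder of this log is that cycle swept across the full matrix; keyid 4 carries no controller key, so that case exercises unidirectional authentication only. The recurring check and the loop driving it, using the function names from host/auth.sh:

    # Post-attach check and teardown, repeated after every combination.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0

    # Iteration order for the rest of the log (host/auth.sh@100-104).
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done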
00:36:27.438 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.697 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.697 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.698 nvme0n1 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:27.698 23:40:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.698 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.958 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.959 nvme0n1 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.959 23:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.959 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.218 nvme0n1 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.218 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 nvme0n1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.478 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.737 nvme0n1 00:36:28.737 
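Each round traced above follows the same pattern from host/auth.sh: install the key pair on the kernel nvmet target, pin the host to a single digest/dhgroup pair, attach the controller, verify it appears, then detach. A minimal sketch of one round, condensed from the exact commands in the trace; rpc_cmd (the SPDK JSON-RPC wrapper), nvmet_auth_set_key, and the keys/ckeys arrays are assumed to be provided by the surrounding test framework:

    # One authentication round, condensed from the host/auth.sh trace above.
    digest=sha256 dhgroup=ffdhe3072 keyid=1

    # Install the key (and controller key, if any) for this round on the target.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Restrict the host to exactly the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with the matching host key; the ctrlr key is passed only when set.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded iff the controller materialized; then tear down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The trace repeats exactly this sequence once per keyid, with the bare `nvme0n1` lines in between being the namespace's block device surfacing after each successful attach.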
23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:28.737 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.738 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.996 nvme0n1 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.996 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.997 23:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.256 nvme0n1 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.256 
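The DHHC-1:&lt;hh&gt;:&lt;base64&gt;: strings echoed above are NVMe-oF DH-HMAC-CHAP secret representations: the &lt;hh&gt; field names an optional hash transform for the secret (00 meaning the secret is used as-is), and the base64 payload is the raw secret followed by a 4-byte CRC-32 of it. That layout can be sanity-checked against a key taken straight from the log; a quick sketch using only bash, base64, and coreutils:

    # Decode one of the DHHC-1 secrets above and check its layout:
    # base64 payload = secret || 4-byte CRC-32 of the secret.
    key='DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==:'
    b64=${key#DHHC-1:*:}    # strip the "DHHC-1:<hh>:" prefix (shortest match)
    b64=${b64%:}            # and the trailing ":"
    total=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "secret: $((total - 4)) bytes, plus 4-byte CRC-32"   # -> secret: 48 bytes

The keys in this run carry 48-byte secrets, and the test deliberately cycles the transform field (00 through 03) across the different keyids.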
23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:29.256 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.257 23:40:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.257 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.516 nvme0n1 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:29.516 23:40:38 
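Note that the keyid=4 round above attaches with --dhchap-key key4 and no --dhchap-ctrlr-key: ckeys[4] is empty (auth.sh@46 echoes `ckey=`), so the `${ckeys[keyid]:+...}` expansion at auth.sh@58 drops the flag entirely and that session authenticates the host only. A standalone illustration of the idiom, with abbreviated placeholder values:

    # ${var:+word} expands to word only when var is set and non-empty; this is
    # how auth.sh@58 makes the controller key optional. Values abbreviated.
    ckeys=([0]="DHHC-1:03:ZmZm...:" [4]="")

    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 extra args: <none>

Using an array for ckey keeps the expansion safe to splice into the attach command unquoted-by-element: it contributes either two arguments or none, never an empty string.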
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.516 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.517 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.776 nvme0n1 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.776 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.777 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.777 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.777 23:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.777 23:40:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.777 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.777 23:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.035 nvme0n1 00:36:30.035 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.035 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.035 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.035 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.035 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.035 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.294 23:40:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.294 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.553 nvme0n1 00:36:30.553 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.553 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.553 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.553 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.554 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.813 nvme0n1 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.813 23:40:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.813 23:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.072 nvme0n1 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:31.073 23:40:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.073 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.641 nvme0n1 00:36:31.641 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.641 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.641 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.641 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.641 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.642 
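The get_main_ns_ip helper traced over and over above (nvmf/common.sh@741-755) just maps the transport under test to an address variable and echoes its value, 10.0.0.1 for tcp in this run. A condensed sketch of that logic; names other than the two candidate variables visible in the trace are illustrative:

    # Condensed from the nvmf/common.sh lines in the trace: pick the address
    # variable by transport, then dereference the variable name and echo it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion of the chosen variable
        echo "${!ip}"                 # -> 10.0.0.1 for tcp in this run
    }

The two-step indirection is why the trace shows `ip=NVMF_INITIATOR_IP` (the variable name) at @748 but `[[ -z 10.0.0.1 ]]` (its value) at @750.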
23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.642 23:40:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.642 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.900 nvme0n1 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.900 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.158 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.158 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.158 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:32.158 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.159 23:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.159 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.418 nvme0n1 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.418 
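Zooming out, the @101/@102 markers above are the two driver loops for this whole stretch of the log: every DH group is retried with every key before the next group starts. In outline, using the same helpers the trace shows at @103/@104 (only sha256 appears in this excerpt, so the digest is written as a literal here):

    # Outline of the host/auth.sh driver loops visible at @101/@102.
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe3072/4096/6144 in this excerpt
        for keyid in "${!keys[@]}"; do      # keyids 0..4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # @103
            connect_authenticate sha256 "$dhgroup" "$keyid"   # @104
        done
    done

Iterating "${!keys[@]}" (the array's indices rather than its values) is what lets the loop body address both keys[keyid] and the matching ckeys[keyid] entry by the same index.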
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==:
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ:
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==:
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]]
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ:
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:32.418 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:32.985 nvme0n1
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:32.985 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=:
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=:
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
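The repeated nvmf/common.sh@741-@755 block above is the `get_main_ns_ip` helper: it maps the active transport to the *name* of the environment variable that holds the target address, then dereferences it. Reconstructed from the trace below; the variable names come straight from the xtrace, while the error-return handling is inferred and should be treated as a sketch:

```bash
# get_main_ns_ip as reconstructable from the nvmf/common.sh@741-755 trace:
# pick the env-var name per transport, then dereference it indirectly.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z ${TEST_TRANSPORT} ]] && return 1       # trace: [[ -z tcp ]]
	ip=${ip_candidates[${TEST_TRANSPORT}]}       # -> NVMF_INITIATOR_IP
	[[ -z ${ip} ]] && return 1                   # trace: [[ -z NVMF_INITIATOR_IP ]]
	[[ -z ${!ip} ]] && return 1                  # indirect: value of $NVMF_INITIATOR_IP
	echo "${!ip}"                                # trace: echo 10.0.0.1
}
```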
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:32.986 23:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:33.246 nvme0n1
00:36:33.246 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:33.246 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:33.246 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:33.246 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:33.246 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il:
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=:
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il:
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]]
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=:
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:33.505 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:33.506 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.075 nvme0n1
00:36:34.075 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.075 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
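In this harness `rpc_cmd` is a thin wrapper around SPDK's JSON-RPC client, so the two RPCs each round boils down to can be issued directly with `scripts/rpc.py`. The flags below are exactly the ones visible in the trace; the default RPC socket and the fact that `key0`/`ckey0` name secrets registered earlier in the test (outside this excerpt) are assumptions:

```bash
# The per-round host-side RPC pair, issued directly (default /var/tmp/spdk.sock
# assumed). key0/ckey0 refer to keys the test registered before this excerpt.
scripts/rpc.py bdev_nvme_set_options \
	--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key0 --dhchap-ctrlr-key ckey0
```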
00:36:34.075 23:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:34.075 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.075 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.075 23:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==:
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==:
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==:
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==:
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.075 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.643 nvme0n1
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk:
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP:
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk:
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP:
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.643 23:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:35.221 nvme0n1
00:36:35.221 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.221 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:35.221 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:35.221 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.221 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:35.221 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==:
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ:
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==:
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ:
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
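The @100/@101/@102 markers that recur through this log reveal the driver: three nested loops sweeping digest, DH group, and key ID, with one configure/connect/verify/teardown round per combination. A skeleton of that structure, reconstructed from the trace (the array contents here are illustrative, inferred from the values this excerpt happens to exercise, not copied from the test source):

```bash
# Skeleton of the sweep implied by the @100-@104 trace markers. This excerpt
# shows sha256 and sha384 digests and ffdhe2048/3072/6144/8192 groups; the
# exact arrays in host/auth.sh are not visible here and are assumed.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do                      # host/auth.sh@100
	for dhgroup in "${dhgroups[@]}"; do                # host/auth.sh@101
		for keyid in "${!keys[@]}"; do                 # host/auth.sh@102
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
			connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
		done
	done
done
```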
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.536 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.104 nvme0n1
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=:
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=:
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.104 23:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.672 nvme0n1
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
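The verification line that repeats after every attach, `[[ nvme0 == \n\v\m\e\0 ]]`, looks garbled but is an xtrace artifact: inside `[[ ]]` the right-hand side of `==` is a glob pattern, so the script quotes it to force a literal comparison, and bash's xtrace renders that quoting as per-character backslashes. The underlying check is simply:

```bash
# What the traced @64 lines do: confirm the attached controller came back
# under the expected name. Quoting the right side of == forces a literal
# match (an unquoted nvme* there would glob), which xtrace prints escaped.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]
```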
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il:
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=:
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il:
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=:
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.672 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.931 nvme0n1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==:
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==:
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==:
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==:
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
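Note the two attach shapes this log alternates between: keyids 0-3 have a controller key, so the host requests mutual authentication, while keyid 4 has `ckey=` empty (the `[[ -z '' ]]` branch at @51), so the attach carries only the host secret. Both command forms below are copied from the trace:

```bash
# Bidirectional DH-HMAC-CHAP: host proves itself and verifies the controller.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key3 --dhchap-ctrlr-key ckey3

# Host-only authentication: no --dhchap-ctrlr-key, as in the keyid=4 rounds.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key4
```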
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.931 23:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.191 nvme0n1
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk:
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP:
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk:
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP:
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.191 nvme0n1
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.191 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==:
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ:
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==:
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ:
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.451 nvme0n1
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.451 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=:
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
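One last recurring idiom worth decoding: every `rpc_cmd` in this log is bracketed by `xtrace_disable`/`set +x` and followed by `[[ 0 == 0 ]]`, which is the harness muting xtrace inside the helper and then asserting the captured exit status. The sketch below is a reconstruction of that idiom, not a verbatim copy of common/autotest_common.sh; the `xtrace_restore` counterpart is an assumption inferred from the symmetric disable:

```bash
# Reconstructed shape of the rpc_cmd bracketing seen in this trace.
xtrace_disable() { set +x; }    # @10 in the trace prints the resulting 'set +x'
rpc_cmd() {
	xtrace_disable
	local rc=0
	scripts/rpc.py "$@" || rc=$?
	xtrace_restore               # assumption: symmetric re-enable helper
	return $rc
}

rpc_cmd bdev_nvme_get_controllers
[[ $? == 0 ]]                    # xtrace renders this as [[ 0 == 0 ]] on success
```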
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=:
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
# xtrace_disable 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
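
The trace above shows only the echo halves of nvmet_auth_set_key — xtrace does not print redirection targets, so the destinations of 'hmac(sha384)', the dhgroup, and the DHHC keys are invisible here. A minimal sketch of what such a helper plausibly does, assuming the upstream Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a host entry matching the NQN used elsewhere in this run; the real host/auth.sh helper may write these differently:

# Sketch only: target-side DH-HMAC-CHAP key setup via nvmet configfs.
# The configfs path and a pre-created host entry are assumptions; the
# attribute names follow the kernel nvmet in-band auth support.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${host}/dhchap_key"      # DHHC-1:xx:...: host key
    # The controller key is optional; keyid 4 has none, hence the
    # [[ -z '' ]] guard visible at auth.sh@51 in the trace.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}
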
00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.711 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.970 nvme0n1 00:36:37.970 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.970 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.970 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.970 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.970 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
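
Every connect_authenticate pass in this section repeats the same initiator-side cycle: restrict the allowed digest and DH group via bdev_nvme_set_options, attach with the keyid under test, confirm the controller came up, then detach. The RPC names and flags below are taken verbatim from the trace; rpc.py stands in for the suite's rpc_cmd wrapper, and the key names (key0..key4, ckey0..ckey3) are assumed to have been registered earlier in the script:

# Condensed from the trace: one authenticated attach/verify/detach cycle.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    rpc.py bdev_nvme_set_options --dhchap-digests "${digest}" \
        --dhchap-dhgroups "${dhgroup}"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # The bare "nvme0n1" lines in the log are the attached namespace being
    # enumerated; the pass/fail check itself is on the controller name.
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0
}
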
00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.971 23:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.230 nvme0n1 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.230 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.489 nvme0n1 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.489 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.748 nvme0n1 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.748 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.007 nvme0n1 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.007 23:40:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.007 23:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.266 nvme0n1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.267 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.526 nvme0n1 00:36:39.526 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.526 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.526 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.526 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.526 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.526 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.526 23:40:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.527 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.527 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.527 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.786 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.786 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.786 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:39.786 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.786 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.787 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.046 nvme0n1 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:40.046 23:40:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:40.046 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.047 23:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.306 nvme0n1 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:40.306 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.567 nvme0n1 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.567 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.135 nvme0n1 00:36:41.135 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.135 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.135 23:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.135 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.135 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.135 23:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.135 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.395 nvme0n1 00:36:41.395 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.395 23:40:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.395 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.395 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.395 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.395 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.395 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.654 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.655 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.914 nvme0n1 00:36:41.914 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.915 23:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.482 nvme0n1 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
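
The block of nvmf/common.sh entries ending here (lines 741 through 755 in the trace) is the get_main_ns_ip helper resolving which address the host side should dial: NVMF_FIRST_TARGET_IP for rdma runs, NVMF_INITIATOR_IP for tcp runs, and the latter dereferences to 10.0.0.1 on this machine. A minimal sketch of the helper as reconstructed from the xtrace follows; the ${!ip} indirect expansion and the early-return shape are assumptions, untraced branches are elided, and the authoritative body lives in nvmf/common.sh:

    # get_main_ns_ip, reconstructed from the xtrace above (a sketch, not the
    # verbatim source). TEST_TRANSPORT is "tcp" in this run.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Both -z tests trace at common.sh@747, so they likely share a source line.
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}  # the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1           # dereference it: 10.0.0.1 here
        echo "${!ip}"
    }

The echoed address feeds directly into the bdev_nvme_attach_controller call that follows each resolution (-a 10.0.0.1 -s 4420).
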
00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.482 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.741 nvme0n1 00:36:42.741 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.741 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.741 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.741 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.741 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.741 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
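
Each repetition in this log is one iteration of the same pattern: nvmet_auth_set_key programs a DHHC-1 secret into the kernel nvmet target (the echo 'hmac(sha384)', echo ffdhe8192, and echo DHHC-1:... entries at host/auth.sh@48-51 are presumably redirected into configfs attributes, though the redirection targets are not visible in the xtrace), and connect_authenticate then exercises the host side. Below is a sketch of connect_authenticate as reconstructed from the trace (host/auth.sh@55-65); the argument handling is an assumption, the NQNs and port are copied literally from the log, ckeys[] is the caller's array of controller secrets, and rpc_cmd is the suite's wrapper around SPDK's rpc.py:

    # connect_authenticate <digest> <dhgroup> <keyid>, reconstructed from the
    # xtrace: configure host DH-HMAC-CHAP options, attach, verify, detach.
    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # Only pass --dhchap-ctrlr-key when a controller key exists (auth.sh@58).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to exactly the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only succeeds if DH-HMAC-CHAP completed, so a controller
        # named nvme0 in the listing is the pass condition (auth.sh@64).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The surrounding for digest / for dhgroup / for keyid loops (host/auth.sh@100-103) drive this through every sha384 and sha512 by ffdhe* by key0..key4 combination, which is why the trace repeats with only the (digest, dhgroup, keyid) triple and the DHHC-1 strings changing.
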
00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.999 23:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.567 nvme0n1 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:43.567 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.568 23:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.135 nvme0n1 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.135 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.702 nvme0n1 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.702 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.961 23:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.528 nvme0n1 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.528 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:45.529 23:40:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.529 23:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.097 nvme0n1 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.097 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.356 nvme0n1 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.356 23:40:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.356 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.357 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.616 nvme0n1 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:46.616 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.617 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.876 nvme0n1 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.876 23:40:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:46.876 23:40:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.876 nvme0n1 00:36:46.876 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.135 23:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:47.135 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.136 nvme0n1 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.136 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:47.395 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.396 nvme0n1 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.396 
23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.396 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.656 23:40:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.656 nvme0n1 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
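
The nvmet_auth_set_key calls traced around here (host/auth.sh@42-@51) program the kernel nvmet target with the same digest, DH group, and DHHC-1 secrets the SPDK initiator will present. A condensed sketch of what those xtrace'd echoes amount to — the configfs attribute names and host-directory path are assumptions based on the standard Linux nvmet layout, since set -x does not print redirections:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Hypothetical configfs host directory; the real path is prepared earlier in auth.sh.
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # @48
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # @49
    echo "${key}"          > "${host_dir}/dhchap_key"       # @50
    # Controller key only when this keyid has one (@51); keyid 4 does not in this run.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}
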
00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.656 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.916 nvme0n1 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.916 23:40:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.916 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
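
get_main_ns_ip (nvmf/common.sh@741-@755) is what resolves the 10.0.0.1 used by every attach in this section: it maps the transport to the environment variable holding the initiator-facing IP and dereferences it. A sketch reconstructed from the trace — the array contents and checks are verbatim from the @742-@755 lines, while the transport variable name is an assumption:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )

    [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip=NVMF_INITIATOR_IP
    ip=${!ip}                              # indirect expansion -> 10.0.0.1 here
    [[ -z ${ip} ]] && return 1             # @750
    echo "${ip}"                           # @755
}
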
00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.917 23:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.177 nvme0n1 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.177 
23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.177 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.436 nvme0n1 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.436 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.695 nvme0n1 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.695 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.954 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.954 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.955 23:40:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.955 23:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 nvme0n1 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
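
Each nvme0n1 block in this log is one full connect_authenticate pass (host/auth.sh@55-@65): restrict the initiator to one digest and one DH group, attach with the matching --dhchap-key (plus a controller key when one exists), verify the controller actually materialized, then tear it down. A sketch assembled from the traced RPCs — rpc_cmd invocations and NQNs are exactly as logged; the surrounding function body is a reconstruction:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Expands to nothing when ckeys[keyid] is empty, as seen at @58 for keyid 4.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"          # @60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"                              # @61
    # DH-HMAC-CHAP success is observable as the controller showing up (@64);
    # a failed handshake would leave nothing named nvme0 to detach.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0                                # @65
}
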
00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:49.214 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.215 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.473 nvme0n1 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.473 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.474 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.733 nvme0n1 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.733 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.992 nvme0n1 00:36:49.992 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.992 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.992 23:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.992 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.992 23:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.992 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.993 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
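
The @101/@102 markers are the two loops driving this whole section: every DH group is exercised against every key slot, with the target rekeyed (@103) before each authenticated connect (@104). In sketch form, using only the values visible in this sha512 pass:

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # the groups seen so far in this log
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                   # keyids 0..4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103
        connect_authenticate sha512 "$dhgroup" "$keyid"  # @104
    done
done
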
00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.252 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.511 nvme0n1 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
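Each connect_authenticate iteration traced here follows the same cycle: pin the host to a single DH-HMAC-CHAP digest and DH group, attach to the nvmet target over TCP with the key under test, confirm the controller actually appeared, then detach. A minimal sketch of one iteration, using only the RPCs visible in this trace (key0/ckey0 are key names registered earlier in the run, not shown in this excerpt):

# Pin exactly one digest and one DH group for this iteration.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# Attach with the host key and, when one exists, the controller (bidirectional) key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Authentication succeeded only if the controller shows up; then tear it down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0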
00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.512 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.079 nvme0n1 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.079 23:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.337 nvme0n1 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.337 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.594 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:51.595 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.595 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.853 nvme0n1 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.853 23:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.487 nvme0n1 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.487 23:41:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MzNDI1NWM4ODRiM2ZlODhlZGIwODlkMjM3NDM4NTKE47Il: 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZmMTExZGI3ZTllNTNiMTY2ZjI2NGMxZTExYzcxMTMwMTFiYmQ3MjhmYWQyZDVkYTc3OTM5ZmU0OTZjYmEwMfKYtm4=: 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.487 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.052 nvme0n1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.052 23:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.619 nvme0n1 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.619 23:41:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc2NDE4ZWVjYWI4Yjc2NTM5YTFhMDk2MDZkZTNiYjRzamdk: 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U0NzEzYzNjOWIyZmM1MTkyYzRhMmQ3YTNmNzVhOTW8MLPP: 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.619 23:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.237 nvme0n1 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQxYjNiMGY2NzlhYjA5MTg0Mzc4ZGE0MDYyZjE1OTA5YzVkOTYwYjk0OGJmNjNivnWZ/w==: 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2ViZTBjZDc1YzA1ZDdiMmU1NmJmZTg5YjdkYzY3YWakVoYJ: 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:54.237 23:41:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.237 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 nvme0n1 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjViMDRhYWFhMGJmOTlkNmQwNmM4ODA3YzFkYjg3Y2ViYzZhNzFiN2RkYjM1ODZmYmY4YWI3MDE5ZGQyNmY3OMrg5sw=: 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:55.171 23:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 nvme0n1 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDllYzk4Y2U3OTMxNTZhNTU1ZjcxNWNiOGJiYmJiYjcyOTE5NDI5Yzk2NTRkZWIy9DuMHQ==: 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2Y3MWZiNDZlYjNlYWUwMTZkYzFiYzEwNmMwODVjZWQwNWYzNDNjOTA1Nzg1NTQ5Zbacgw==: 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.738 
23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 request: 00:36:55.738 { 00:36:55.738 "name": "nvme0", 00:36:55.738 "trtype": "tcp", 00:36:55.738 "traddr": "10.0.0.1", 00:36:55.738 "adrfam": "ipv4", 00:36:55.738 "trsvcid": "4420", 00:36:55.738 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:55.738 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:55.738 "prchk_reftag": false, 00:36:55.738 "prchk_guard": false, 00:36:55.738 "hdgst": false, 00:36:55.738 "ddgst": false, 00:36:55.738 "method": "bdev_nvme_attach_controller", 00:36:55.738 "req_id": 1 00:36:55.738 } 00:36:55.738 Got JSON-RPC error response 00:36:55.738 response: 00:36:55.738 { 00:36:55.738 "code": -5, 00:36:55.738 "message": "Input/output error" 00:36:55.738 } 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.738 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.738 request: 00:36:55.738 { 00:36:55.738 "name": "nvme0", 00:36:55.738 "trtype": "tcp", 00:36:55.738 "traddr": "10.0.0.1", 00:36:55.738 "adrfam": "ipv4", 00:36:55.738 "trsvcid": "4420", 00:36:55.739 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:55.739 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:55.739 "prchk_reftag": false, 00:36:55.739 "prchk_guard": false, 00:36:55.739 "hdgst": false, 00:36:55.739 "ddgst": false, 00:36:55.739 "dhchap_key": "key2", 00:36:55.739 "method": "bdev_nvme_attach_controller", 00:36:55.739 "req_id": 1 00:36:55.739 } 00:36:55.739 Got JSON-RPC error response 00:36:55.739 response: 00:36:55.739 { 00:36:55.739 "code": -5, 00:36:55.739 "message": "Input/output error" 00:36:55.739 } 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:55.739 23:41:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.739 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.997 request: 00:36:55.997 { 00:36:55.997 "name": "nvme0", 00:36:55.997 "trtype": "tcp", 00:36:55.997 "traddr": "10.0.0.1", 00:36:55.997 "adrfam": "ipv4", 
00:36:55.997 "trsvcid": "4420", 00:36:55.997 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:55.997 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:55.997 "prchk_reftag": false, 00:36:55.997 "prchk_guard": false, 00:36:55.997 "hdgst": false, 00:36:55.997 "ddgst": false, 00:36:55.997 "dhchap_key": "key1", 00:36:55.997 "dhchap_ctrlr_key": "ckey2", 00:36:55.997 "method": "bdev_nvme_attach_controller", 00:36:55.997 "req_id": 1 00:36:55.997 } 00:36:55.997 Got JSON-RPC error response 00:36:55.997 response: 00:36:55.997 { 00:36:55.997 "code": -5, 00:36:55.997 "message": "Input/output error" 00:36:55.997 } 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:55.997 rmmod nvme_tcp 00:36:55.997 rmmod nvme_fabrics 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2642213 ']' 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2642213 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2642213 ']' 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2642213 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2642213 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2642213' 00:36:55.997 killing process with pid 2642213 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2642213 00:36:55.997 23:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2642213 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:57.376 23:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:59.284 23:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:01.821 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:01.821 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:02.389 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:02.389 23:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fEq /tmp/spdk.key-null.wOQ /tmp/spdk.key-sha256.SUq /tmp/spdk.key-sha384.IZS /tmp/spdk.key-sha512.5we 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:02.389 23:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:04.924 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:04.924 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:04.924 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:04.924 00:37:04.924 real 0m49.307s 00:37:04.924 user 0m44.640s 00:37:04.924 sys 0m11.291s 00:37:04.924 23:41:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:04.924 23:41:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.924 ************************************ 00:37:04.924 END TEST nvmf_auth_host 00:37:04.924 ************************************ 00:37:04.924 23:41:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:04.924 23:41:13 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:37:04.924 23:41:13 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:04.924 23:41:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:04.924 23:41:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:04.924 23:41:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:04.924 ************************************ 00:37:04.924 START TEST nvmf_digest 00:37:04.924 ************************************ 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:04.924 * Looking for test storage... 
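Note: the nvmf_auth_host passes that end above exercise the DH-CHAP failure paths by attaching with key material the target is not configured for and requiring the RPC to fail with -5 (Input/output error). A minimal sketch of one such negative check, reusing the rpc.py invocation and NQNs that appear in the trace; the NOT/valid_exec_arg plumbing from autotest_common.sh is elided, and rpc.py is assumed to talk to the app's default /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    # Attaching with a mismatched DH-CHAP key must fail, not connect.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
           -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
           --dhchap-key key2; then
        echo "FAIL: attach with wrong DH-CHAP key succeeded" >&2
        exit 1
    fi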
00:37:04.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.924 23:41:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:04.925 23:41:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:37:04.925 23:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:10.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:10.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:10.196 Found net devices under 0000:86:00.0: cvl_0_0 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:10.196 Found net devices under 0000:86:00.1: cvl_0_1 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:10.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:10.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:37:10.196 00:37:10.196 --- 10.0.0.2 ping statistics --- 00:37:10.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.196 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:10.196 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:10.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:10.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:37:10.455 00:37:10.455 --- 10.0.0.1 ping statistics --- 00:37:10.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.455 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:10.455 ************************************ 00:37:10.455 START TEST nvmf_digest_clean 00:37:10.455 ************************************ 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2655264 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2655264 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2655264 ']' 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.455 
23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:10.455 23:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:10.455 [2024-07-10 23:41:19.404562] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:37:10.455 [2024-07-10 23:41:19.404647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.455 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.455 [2024-07-10 23:41:19.513438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.714 [2024-07-10 23:41:19.730749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.714 [2024-07-10 23:41:19.730798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.714 [2024-07-10 23:41:19.730811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.714 [2024-07-10 23:41:19.730821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.714 [2024-07-10 23:41:19.730830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
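Note: the nvmftestinit phase above reduces to a small, reproducible topology: one port of the e810 pair, cvl_0_0, is moved into a private network namespace and carries the target address, while cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of those steps, using the interface and namespace names from this run:

    # Target side lives in its own netns; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target sanity check
    # The target app is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc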
00:37:10.714 [2024-07-10 23:41:19.730858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.282 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.541 null0 00:37:11.541 [2024-07-10 23:41:20.591120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.800 [2024-07-10 23:41:20.615332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2655508 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2655508 /var/tmp/bperf.sock 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:11.800 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2655508 ']' 00:37:11.801 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.801 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:11.801 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:11.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.801 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.801 23:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.801 [2024-07-10 23:41:20.694029] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:37:11.801 [2024-07-10 23:41:20.694118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655508 ] 00:37:11.801 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.801 [2024-07-10 23:41:20.796775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.059 [2024-07-10 23:41:21.015322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.626 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.626 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:12.626 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:12.626 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:12.626 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:13.193 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.193 23:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.193 nvme0n1 00:37:13.193 23:41:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:13.193 23:41:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.452 Running I/O for 2 seconds... 
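Note: every run_bperf pass follows the same shape: bdevperf starts suspended with -z --wait-for-rpc on its own UNIX socket, the accel framework is initialized over that socket, the controller is attached with --ddgst so data digests are negotiated, and perform_tests drives I/O for the configured runtime. A condensed sketch of the randread 4096/qd128 case launched above:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock
    $spdk/build/examples/bdevperf -m 2 -r $sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # waitforlisten (autotest_common.sh) polls until $sock accepts RPCs.
    $spdk/scripts/rpc.py -s $sock framework_start_init
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests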
00:37:15.354 00:37:15.354 Latency(us) 00:37:15.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:15.354 nvme0n1 : 2.00 22406.33 87.52 0.00 0.00 5706.44 2778.16 13335.15 00:37:15.354 =================================================================================================================== 00:37:15.354 Total : 22406.33 87.52 0.00 0.00 5706.44 2778.16 13335.15 00:37:15.354 0 00:37:15.354 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:15.354 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:15.354 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:15.354 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:15.354 | select(.opcode=="crc32c") 00:37:15.354 | "\(.module_name) \(.executed)"' 00:37:15.354 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2655508 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2655508 ']' 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2655508 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2655508 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2655508' 00:37:15.613 killing process with pid 2655508 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2655508 00:37:15.613 Received shutdown signal, test time was about 2.000000 seconds 00:37:15.613 00:37:15.613 Latency(us) 00:37:15.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.613 =================================================================================================================== 00:37:15.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.613 23:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2655508 00:37:16.548 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:16.548 23:41:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.548 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.548 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:16.548 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2656220 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2656220 /var/tmp/bperf.sock 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2656220 ']' 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.549 23:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.807 [2024-07-10 23:41:25.676347] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:37:16.807 [2024-07-10 23:41:25.676442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2656220 ] 00:37:16.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:16.807 Zero copy mechanism will not be used. 
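Note: after each run the suite proves the digests were really computed: it reads the accel statistics from the bperf socket and requires that the crc32c opcode executed at least once on the expected module, which is software whenever the pass was launched with scan_dsa=false. A sketch of that check, using the exact jq filter from the trace:

    read -r acc_module acc_executed < <(
        $spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c")
                  | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) || exit 1          # crc32c work actually ran
    [[ $acc_module == software ]] || exit 1   # on the expected module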
00:37:16.807 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.807 [2024-07-10 23:41:25.780229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.066 [2024-07-10 23:41:26.007304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.633 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:17.633 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:17.633 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:17.633 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:17.633 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:18.201 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.201 23:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.459 nvme0n1 00:37:18.459 23:41:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:18.459 23:41:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:18.459 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:18.459 Zero copy mechanism will not be used. 00:37:18.459 Running I/O for 2 seconds... 
00:37:20.383 00:37:20.383 Latency(us) 00:37:20.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.383 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:20.383 nvme0n1 : 2.00 3915.90 489.49 0.00 0.00 4082.96 1082.77 9516.97 00:37:20.383 =================================================================================================================== 00:37:20.383 Total : 3915.90 489.49 0.00 0.00 4082.96 1082.77 9516.97 00:37:20.642 0 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:20.642 | select(.opcode=="crc32c") 00:37:20.642 | "\(.module_name) \(.executed)"' 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2656220 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2656220 ']' 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2656220 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2656220 00:37:20.642 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:20.643 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:20.643 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2656220' 00:37:20.643 killing process with pid 2656220 00:37:20.643 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2656220 00:37:20.643 Received shutdown signal, test time was about 2.000000 seconds 00:37:20.643 00:37:20.643 Latency(us) 00:37:20.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.643 =================================================================================================================== 00:37:20.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.643 23:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2656220 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:22.060 23:41:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2657143 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2657143 /var/tmp/bperf.sock 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2657143 ']' 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:22.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:22.060 23:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:22.060 [2024-07-10 23:41:30.827623] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:37:22.060 [2024-07-10 23:41:30.827717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657143 ] 00:37:22.060 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.060 [2024-07-10 23:41:30.930513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.319 [2024-07-10 23:41:31.155123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.577 23:41:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:22.577 23:41:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:22.577 23:41:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:22.577 23:41:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:22.577 23:41:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:23.145 23:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.145 23:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.713 nvme0n1 00:37:23.713 23:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:23.713 23:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:23.713 Running I/O for 2 seconds... 
00:37:25.616 00:37:25.616 Latency(us) 00:37:25.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.616 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.616 nvme0n1 : 2.00 24346.42 95.10 0.00 0.00 5250.19 2208.28 9972.87 00:37:25.616 =================================================================================================================== 00:37:25.616 Total : 24346.42 95.10 0.00 0.00 5250.19 2208.28 9972.87 00:37:25.616 0 00:37:25.616 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:25.616 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:25.616 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:25.616 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:25.616 | select(.opcode=="crc32c") 00:37:25.616 | "\(.module_name) \(.executed)"' 00:37:25.616 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2657143 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2657143 ']' 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2657143 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2657143 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2657143' 00:37:25.875 killing process with pid 2657143 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2657143 00:37:25.875 Received shutdown signal, test time was about 2.000000 seconds 00:37:25.875 00:37:25.875 Latency(us) 00:37:25.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.875 =================================================================================================================== 00:37:25.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:25.875 23:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2657143 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:27.251 23:41:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2657855 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2657855 /var/tmp/bperf.sock 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2657855 ']' 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:27.251 23:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:27.251 [2024-07-10 23:41:35.970928] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:37:27.251 [2024-07-10 23:41:35.971039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657855 ] 00:37:27.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:27.251 Zero copy mechanism will not be used. 
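For reference, the bdevperf command line traced above, reflowed as it would be typed in a shell; nothing here is new beyond the line breaks and the backgrounding that waitforlisten implies. The 131072-byte I/O size is above the 65536-byte threshold, which is why the zero-copy notice appears for this run:

    # 2-second randwrite run, queue depth 16, 128 KiB I/Os, core mask 0x2.
    # -z makes bdevperf wait for the perform_tests RPC instead of starting I/O
    # immediately, and --wait-for-rpc pauses subsystem init so the accel layer
    # can be configured before framework_start_init. Backgrounded so the test
    # can poll the RPC socket; the pid feeds waitforlisten/killprocess.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    bperfpid=$!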
00:37:27.251 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.251 [2024-07-10 23:41:36.073621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.251 [2024-07-10 23:41:36.301192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.820 23:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:27.820 23:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:27.820 23:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:27.820 23:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:27.820 23:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:28.387 23:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.387 23:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.646 nvme0n1 00:37:28.646 23:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:28.646 23:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:28.903 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:28.903 Zero copy mechanism will not be used. 00:37:28.903 Running I/O for 2 seconds... 
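While this run proceeds, note what the pass/fail check that follows each run (host/digest.sh@93 through @96 in the trace) reduces to. A sketch assembled from the traced commands, with the software module expected because the run was started with scan_dsa=false:

    # Pull accel framework statistics from bdevperf and keep only the crc32c row.
    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    # The digest run passes only if crc32c actually executed, and in the expected module.
    (( acc_executed > 0 ))
    [[ $acc_module == software ]]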
00:37:30.805
00:37:30.805 Latency(us)
00:37:30.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:30.805 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:37:30.805 nvme0n1 : 2.00 6021.87 752.73 0.00 0.00 2652.22 2023.07 9516.97
00:37:30.805 ===================================================================================================================
00:37:30.805 Total : 6021.87 752.73 0.00 0.00 2652.22 2023.07 9516.97
00:37:30.805 0
00:37:30.805 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:30.805 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:30.805 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:30.805 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:30.805 | select(.opcode=="crc32c") 00:37:30.805 | "\(.module_name) \(.executed)"' 00:37:30.805 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2657855 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2657855 ']' 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2657855 00:37:31.063 23:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:31.063 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:31.063 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2657855 00:37:31.063 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:31.063 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:31.063 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2657855' 00:37:31.063 killing process with pid 2657855 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2657855 00:37:31.063 Received shutdown signal, test time was about 2.000000 seconds
00:37:31.063
00:37:31.063 Latency(us)
00:37:31.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:31.063 ===================================================================================================================
00:37:31.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:31.063 23:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2657855 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2655264 00:37:32.440 23:41:41
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2655264 ']' 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2655264 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2655264 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2655264' 00:37:32.440 killing process with pid 2655264 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2655264 00:37:32.440 23:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2655264 00:37:33.375 00:37:33.375 real 0m23.103s 00:37:33.375 user 0m43.061s 00:37:33.375 sys 0m4.706s 00:37:33.375 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:33.375 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:33.375 ************************************ 00:37:33.375 END TEST nvmf_digest_clean 00:37:33.375 ************************************ 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:33.635 ************************************ 00:37:33.635 START TEST nvmf_digest_error 00:37:33.635 ************************************ 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2659019 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2659019 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2659019 ']' 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:33.635 23:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.635 [2024-07-10 23:41:42.579370] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:37:33.635 [2024-07-10 23:41:42.579459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.635 EAL: No free 2048 kB hugepages reported on node 1 00:37:33.635 [2024-07-10 23:41:42.688210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.894 [2024-07-10 23:41:42.899763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.894 [2024-07-10 23:41:42.899807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.894 [2024-07-10 23:41:42.899818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.894 [2024-07-10 23:41:42.899829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.894 [2024-07-10 23:41:42.899838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
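The target above runs with -e 0xFFFF, so all tracepoint groups are captured, and the notices spell out the two ways to look at them. A usage sketch: the -s nvmf -i 0 invocation and the /dev/shm/nvmf_trace.0 file are taken from the log itself, while the spdk_trace location under build/bin and the -f flag for offline files are assumptions about this build tree:

    # Live snapshot of events from the running nvmf target (app name nvmf,
    # instance id 0), exactly as the notice suggests.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

    # Or copy the shared-memory trace file and decode it offline later.
    cp /dev/shm/nvmf_trace.0 /tmp/
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0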
00:37:33.894 [2024-07-10 23:41:42.899870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:34.462 [2024-07-10 23:41:43.389616] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.462 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:34.720 null0 00:37:34.720 [2024-07-10 23:41:43.755323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.720 [2024-07-10 23:41:43.779511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2659270 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2659270 /var/tmp/bperf.sock 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:34.720 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2659270 ']' 00:37:34.979 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:34.979 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:37:34.979 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:34.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:34.979 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:34.979 23:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:34.979 [2024-07-10 23:41:43.857062] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:37:34.979 [2024-07-10 23:41:43.857177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659270 ] 00:37:34.979 EAL: No free 2048 kB hugepages reported on node 1 00:37:34.979 [2024-07-10 23:41:43.959462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.237 [2024-07-10 23:41:44.185115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:35.805 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.806 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:35.806 23:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:36.375 nvme0n1 00:37:36.375 23:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:36.375 23:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.375 23:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:36.375 23:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.375 23:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:36.375 23:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:36.375 Running I/O for 2 seconds... 00:37:36.375 [2024-07-10 23:41:45.387076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.375 [2024-07-10 23:41:45.387120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.375 [2024-07-10 23:41:45.387136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.375 [2024-07-10 23:41:45.401749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.375 [2024-07-10 23:41:45.401782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.375 [2024-07-10 23:41:45.401796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.375 [2024-07-10 23:41:45.414077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.375 [2024-07-10 23:41:45.414108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.375 [2024-07-10 23:41:45.414121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.375 [2024-07-10 23:41:45.423929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.375 [2024-07-10 23:41:45.423957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.375 [2024-07-10 23:41:45.423970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.375 [2024-07-10 23:41:45.436614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.375 [2024-07-10 23:41:45.436641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.375 [2024-07-10 23:41:45.436654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.447544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.447572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.634 [2024-07-10 23:41:45.447589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.457450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.457476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:112 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.634 [2024-07-10 23:41:45.457488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.468459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.468486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.634 [2024-07-10 23:41:45.468498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.478948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.478975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.634 [2024-07-10 23:41:45.478999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.489834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.489862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.634 [2024-07-10 23:41:45.489874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.503313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.503340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.634 [2024-07-10 23:41:45.503352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.634 [2024-07-10 23:41:45.512922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.634 [2024-07-10 23:41:45.512949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.512961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.525199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.525224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.525237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.536002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 
23:41:45.536029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.536041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.546959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.546991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.547003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.556968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.556995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.557008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.568422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.568450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.568463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.579406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.579434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.579446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.589683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.589711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.589723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.603137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.603173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.603186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.613335] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.613363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.613375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.625171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.625199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.625212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.636495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.636522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.636534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.647043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.647069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.647082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.657958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.657985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.657997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.668260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.668286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.668298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.681341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.681368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.681380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.635 [2024-07-10 23:41:45.691469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.635 [2024-07-10 23:41:45.691495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.635 [2024-07-10 23:41:45.691507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.704772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.704800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.704814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.714755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.714781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.714794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.726880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.726906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.726918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.741411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.741442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.741454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.754601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.754629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.754640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.764573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.764599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.764611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.778845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.778871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.778883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.792319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.792346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.792358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.806017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.806045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.806057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.815465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.815491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.815504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.829180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.829208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.829220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.841032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.841059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.841071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.852189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.852216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.852227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.865303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.865331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.865343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.876782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.876809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.876821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.886002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.886029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.886042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.897493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.897521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.897533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.909121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.909148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.909166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.919084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.919110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.919123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.932083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 
23:41:45.932111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.932123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.945476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.945507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.945519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.895 [2024-07-10 23:41:45.958710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:36.895 [2024-07-10 23:41:45.958737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.895 [2024-07-10 23:41:45.958749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:45.969005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:45.969033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.155 [2024-07-10 23:41:45.969045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:45.982696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:45.982732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.155 [2024-07-10 23:41:45.982744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:45.991693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:45.991720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.155 [2024-07-10 23:41:45.991732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:46.003274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:46.003301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.155 [2024-07-10 23:41:46.003313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:46.014849] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:46.014876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.155 [2024-07-10 23:41:46.014889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:46.025693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:46.025720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.155 [2024-07-10 23:41:46.025732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.155 [2024-07-10 23:41:46.035282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.155 [2024-07-10 23:41:46.035310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.156 [2024-07-10 23:41:46.035323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.156 [2024-07-10 23:41:46.046086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.156 [2024-07-10 23:41:46.046114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.156 [2024-07-10 23:41:46.046127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.156 [2024-07-10 23:41:46.056387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.156 [2024-07-10 23:41:46.056415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.156 [2024-07-10 23:41:46.056427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.156 [2024-07-10 23:41:46.067697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.156 [2024-07-10 23:41:46.067725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.156 [2024-07-10 23:41:46.067737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:37.156 [2024-07-10 23:41:46.079084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:37.156 [2024-07-10 23:41:46.079114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.156 [2024-07-10 23:41:46.079126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:37.156 [2024-07-10 23:41:46.090284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:37.156 [2024-07-10 23:41:46.090314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.156 [2024-07-10 23:41:46.090326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:37.156 [2024-07-10 23:41:46.100729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:37.156 [2024-07-10 23:41:46.100757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.156 [2024-07-10 23:41:46.100769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:37.156 [2024-07-10 23:41:46.112142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:37.156 [2024-07-10 23:41:46.112176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.156 [2024-07-10 23:41:46.112188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly ninety further READ completions omitted (23:41:46.123 through 23:41:47.362): each repeats the same three-line pattern -- data digest error on tqpair=(0x61500032d780), READ sqid:1 with a varying cid/lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:37:38.459 [2024-07-10 23:41:47.377555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:38.459 [2024-07-10 23:41:47.377583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:38.459 [2024-07-10 23:41:47.377595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:38.459
00:37:38.459 Latency(us)
00:37:38.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:38.459 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:38.459 nvme0n1 : 2.01 22319.40 87.19 0.00 0.00 5726.99 2963.37 19831.76
00:37:38.459 ===================================================================================================================
00:37:38.459 Total : 22319.40 87.19 0.00 0.00 5726.99 2963.37 19831.76
00:37:38.459 0
00:37:38.459 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:38.459 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:38.459 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:38.459 | .driver_specific
00:37:38.459 | .nvme_error
00:37:38.459 | .status_code
00:37:38.459 | .command_transient_transport_error'
00:37:38.459 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
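Each completion above is one corrupted data digest surfacing through the NVMe/TCP receive path, and the count that get_transient_errcount reads back is the per-bdev nvme_error statistic enabled by the bdev_nvme_set_options --nvme-error-stat call visible in the setup trace further below. A minimal standalone sketch of the same query, assuming the bdevperf RPC socket from this run (/var/tmp/bperf.sock) is still listening; the rpc and errcount shell variables are added here only for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Pull iostats for the attached bdev and extract the transient transport error tally.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test only asserts the tally is non-zero; this run observed 175.
    (( errcount > 0 )) && echo "command_transient_transport_error=$errcount"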
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 175 > 0 ))
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2659270
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2659270 ']'
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2659270
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2659270
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2659270'
00:37:38.718 killing process with pid 2659270
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2659270
00:37:38.718 Received shutdown signal, test time was about 2.000000 seconds
00:37:38.718
00:37:38.718 Latency(us)
00:37:38.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:38.718 ===================================================================================================================
00:37:38.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:38.718 23:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2659270
00:37:39.655 23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2659977
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2659977 /var/tmp/bperf.sock
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2659977 ']'
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
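The @57 line above launches the I/O generator for the next case: bdevperf pinned by core mask 0x2 to core 1 (-m 2), exposing an RPC socket (-r), set up for 128 KiB random reads at queue depth 16 for 2 seconds, and started idle (-z) so the bdev can be attached and error injection armed before any I/O runs. A hedged sketch of an equivalent standalone launch (binary path, socket, and flags taken verbatim from the trace; the spdk and bperfpid variables are added here, and the flag glosses are interpretations, not restated from the log):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z keeps bdevperf idle until perform_tests arrives over the RPC socket.
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!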
00:37:39.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
23:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:39.914 [2024-07-10 23:41:48.746598] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:37:39.914 [2024-07-10 23:41:48.746692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659977 ]
00:37:39.914 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:39.914 Zero copy mechanism will not be used.
00:37:39.914 EAL: No free 2048 kB hugepages reported on node 1
00:37:40.173 [2024-07-10 23:41:48.849118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:40.743 [2024-07-10 23:41:49.072185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:40.743 23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
23:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:41.313 nvme0n1
00:37:41.313 23:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
23:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
23:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
23:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
23:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
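The RPC sequence above is the whole recipe for this failure mode: enable per-opcode NVMe error accounting with unlimited bdev-layer retries, attach the TCP controller with data digest (--ddgst) so received payloads are crc32c-verified, arm the accel error injector to corrupt crc32c results, then release the paused workload. Condensed into a sketch (commands verbatim from the trace; only the spdk/rpc shell variables are added, and the exact -i 32 interval semantics are the accel_error module's, not restated here):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-opcode NVMe error stats; retry failed I/O indefinitely inside the bdev layer.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the target with TCP data digest enabled so corrupted payload CRCs are caught.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption in the accel layer (-o opcode, -t injection type, -i 32 as traced above).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the queued randread workload in the idle bdevperf instance.
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests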
size of 131072 is greater than zero copy threshold (65536). 00:37:41.313 Zero copy mechanism will not be used. 00:37:41.313 Running I/O for 2 seconds... 00:37:41.313 [2024-07-10 23:41:50.245874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.245918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.245934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.255105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.255141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.255156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.263851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.263882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.263896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.273009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.273040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.273054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.281269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.281298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.281312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.290248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.290277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.290290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.298629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.298657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.298670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.307374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.307403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.307416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.316157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.316194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.316218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.324909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.324938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.324951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.333715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.333745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.333763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.343456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.343484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.343497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.352678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.352707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.352720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.362095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 
23:41:50.362124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.362137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.369806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.369834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.369847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.313 [2024-07-10 23:41:50.377020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.313 [2024-07-10 23:41:50.377048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.313 [2024-07-10 23:41:50.377060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.574 [2024-07-10 23:41:50.384551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.574 [2024-07-10 23:41:50.384581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.574 [2024-07-10 23:41:50.384594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.574 [2024-07-10 23:41:50.392069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.574 [2024-07-10 23:41:50.392097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.574 [2024-07-10 23:41:50.392110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.574 [2024-07-10 23:41:50.399486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.574 [2024-07-10 23:41:50.399516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.574 [2024-07-10 23:41:50.399529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.574 [2024-07-10 23:41:50.407065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.574 [2024-07-10 23:41:50.407094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.574 [2024-07-10 23:41:50.407106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.574 [2024-07-10 23:41:50.414392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.574 [2024-07-10 23:41:50.414421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.574 [2024-07-10 23:41:50.414434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.574 [2024-07-10 23:41:50.421607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.421634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.421647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.428459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.428490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.428503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.435655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.435685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.435698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.441718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.441746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.441759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.447514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.447543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.447555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.453352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.453380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.453392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 
23:41:50.460307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.460336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.460353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.468239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.468267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.468280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.473070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.473097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.473109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.479209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.479237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.479249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.487104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.487132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.487144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.495214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.495242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.495256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.503570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.503599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.503611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.512549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.512578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.512590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.521910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.521938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.521951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.531023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.531050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.531063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.540184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.540213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.540226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.549502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.549531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.549544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.559778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.559807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.559819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.568878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.568906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.568918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.578400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.578428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.578440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.586356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.586384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.586396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.595293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.595322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.595334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.605665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.605693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.605710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.614928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.614957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.614970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.624234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.624262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.624275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.575 [2024-07-10 23:41:50.634256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.575 [2024-07-10 23:41:50.634285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:41.575 [2024-07-10 23:41:50.634298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.644053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.644083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.644097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.653345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.653376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.653390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.663385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.663415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.663428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.672247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.672286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.672299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.682213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.682242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.682255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.692710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.692744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.692757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.702917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.702946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.702959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.712881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.712910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.712923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.722336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.722365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.722377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.732354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.732382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.732395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.742047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.742075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.742087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.751605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.751634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.751647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.760993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.761022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.761034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.771015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.771046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.771063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.780855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.780885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.780899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.790715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.790744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.790758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.800405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.800434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.800447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.810186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.810215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.810229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.820037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.820068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.820081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.830242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.830271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.868 [2024-07-10 23:41:50.830284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.868 [2024-07-10 23:41:50.839838] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.868 [2024-07-10 23:41:50.839867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.839880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.849042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.849070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.849083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.858990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.859024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.859037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.869336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.869365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.869378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.878228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.878257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.878269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.887653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.887681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.887694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.897154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.897189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.897201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.905815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.905843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.905856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.915246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.915274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.915286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.923755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.923783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.923797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.869 [2024-07-10 23:41:50.932356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.869 [2024-07-10 23:41:50.932383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.869 [2024-07-10 23:41:50.932396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.940317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.940345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.940359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.948804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.948832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.948844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.956748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.956776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.956789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.965264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.965292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.965305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.972962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.972988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.973000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.980748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.980776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.980789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.988423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.988452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.988465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:50.995863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:50.995892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:50.995904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:51.003435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:51.003467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.127 [2024-07-10 23:41:51.003480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.127 [2024-07-10 23:41:51.011099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.127 [2024-07-10 23:41:51.011127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.011140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.018991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.019019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.019032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.026484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.026512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.026524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.034348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.034374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.034386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.041802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.041829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.041840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.049121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.049148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.049166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.056399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.056426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.056438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.063436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.063463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.063475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.069837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.069865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.069877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.077536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.077564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.077577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.085597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.085626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.085638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.093098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.093126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.093138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.099549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.099577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.099590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.106591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.106618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.106630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.113794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.113822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.113835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.120391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.120418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.120431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.126846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.126877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.126889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.133399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.133426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.133438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.139968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.139995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.140007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.146645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.146672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.146684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.152796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.152824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.152836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.158965] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.158993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.159005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.165202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.165228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.165240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.171213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.171239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.171251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.177149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.177181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.177193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.183323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.183350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.183362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.128 [2024-07-10 23:41:51.189823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.128 [2024-07-10 23:41:51.189851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.128 [2024-07-10 23:41:51.189863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.196801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.196831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.196844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.203626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.203654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.203667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.209767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.209794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.209807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.215966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.215993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.216005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.222185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.222212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.222224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.228317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.228344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.228355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.234582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.234610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.234626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.241029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.241056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.241068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.387 [2024-07-10 23:41:51.247275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.387 [2024-07-10 23:41:51.247301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.387 [2024-07-10 23:41:51.247314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.250714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.250740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.250753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.256788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.256814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.256826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.262988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.263015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.263027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.269080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.269106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.269118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.275202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.275228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.275241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.281460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.281487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.281499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.287897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.287923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.287935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.294335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.294360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.294372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.300627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.300653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.300665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.306920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.306946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.306958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.313230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.313255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.313267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.319487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.319514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.325820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.325846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.325868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.331971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.331997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.332009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.338361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.338387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.338402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.344694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.344720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.344732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.350928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.350955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.350967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.357217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.357243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.357255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.363383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.363409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.363421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.369574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.369601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.369614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.375902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.375929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.375941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.382388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.382414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.382426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.388737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.388764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.388776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.395131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.395158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.395176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.401484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.401511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.407359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.407385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.407397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.413536] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.413562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.413573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.419802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.419828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.388 [2024-07-10 23:41:51.419840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.388 [2024-07-10 23:41:51.426079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.388 [2024-07-10 23:41:51.426105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.389 [2024-07-10 23:41:51.426116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.389 [2024-07-10 23:41:51.432344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.389 [2024-07-10 23:41:51.432383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.389 [2024-07-10 23:41:51.432394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.389 [2024-07-10 23:41:51.438695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.389 [2024-07-10 23:41:51.438722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.389 [2024-07-10 23:41:51.438734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.389 [2024-07-10 23:41:51.444993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.389 [2024-07-10 23:41:51.445020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.389 [2024-07-10 23:41:51.445036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.389 [2024-07-10 23:41:51.451244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.389 [2024-07-10 23:41:51.451271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.389 [2024-07-10 23:41:51.451284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.457563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.457591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.457603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.464299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.464325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.464337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.471307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.471334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.471346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.478269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.478294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.478306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.484117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.484144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.484157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.490354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.490381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.490393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.496561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.496587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.496599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.502796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.502826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.502837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.508968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.508995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.509007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.515153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.515185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.515197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.521524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.521551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.521562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.527929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.527955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.527967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.534552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.534579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.534591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.542581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.647 [2024-07-10 23:41:51.542608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:42.647 [2024-07-10 23:41:51.542620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.647 [2024-07-10 23:41:51.550508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.550535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.550548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.558085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.558113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.558132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.565492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.565520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.565532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.572841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.572868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.572880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.579751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.579779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.579791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.586885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.586912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.586924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.594012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.594040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.594052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.600400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.600428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.600441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.607523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.607561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.607574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.614648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.614676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.614689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.621407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.621439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.621452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.628511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.628538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.628550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.635772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.635800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.635813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.642908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.642935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.642947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.650011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.650039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.650051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.657144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.657188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.657201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.664385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.664413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.664427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.671522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.671551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.671564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.678754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.678783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.678801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.686238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.686267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.686279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.693567] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.693595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.693607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.701153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.701187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.701199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.648 [2024-07-10 23:41:51.708723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.648 [2024-07-10 23:41:51.708751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.648 [2024-07-10 23:41:51.708763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.715609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.715638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.715651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.719995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.720020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.725137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.725171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.725199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.731615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.731642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.731654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.737908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.737939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.737951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.744123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.744150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.744168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.750395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.750421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.908 [2024-07-10 23:41:51.750434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.908 [2024-07-10 23:41:51.756470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.908 [2024-07-10 23:41:51.756497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.756508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.762502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.762528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.762540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.768473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.768500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.768512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.774742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.774768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.774780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.780904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.780931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.780942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.787029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.787054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.787067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.793084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.793111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.793123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.799023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.799050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.799062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.805320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.805346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.805357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.811615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.811642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.811653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.817848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.817874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.817887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.824127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.824153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.824170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.830251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.830278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.830289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.836484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.836512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.836525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.842808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.842838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.842851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.849755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.849781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.849793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.858346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.858374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.858386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.866631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.866659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.866671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.875362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.875391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.875403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.883203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.883232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.883245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.890987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.891015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.891028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.898467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.898496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.898509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.909 [2024-07-10 23:41:51.906644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.909 [2024-07-10 23:41:51.906673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.909 [2024-07-10 23:41:51.906686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.913501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.913530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.913543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.921032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.921061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.921074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.928500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.928528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.928541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.936645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.936673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.936686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.944352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.944380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.944392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.951976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.952005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.952017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.958716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.958744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.958757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.965313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.965341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.965353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.910 [2024-07-10 23:41:51.971981] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.910 [2024-07-10 23:41:51.972013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.910 [2024-07-10 23:41:51.972025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:51.978905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:51.978935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:51.978948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:51.986134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:51.986170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:51.986183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:51.993514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:51.993543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:51.993556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.000776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.000806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.000819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.007982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.008010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.008022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.015087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.015113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.015126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.022174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.022202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.022214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.029132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.029167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.029180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.036040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.036068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.036081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.042860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.042888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.042901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.049484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.049512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.049524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.053820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.053846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.053859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.058763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.058790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.058803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.065057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.065084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.065097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.071194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.071221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.071234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.170 [2024-07-10 23:41:52.077523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.170 [2024-07-10 23:41:52.077552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.170 [2024-07-10 23:41:52.077565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.083895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.083923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.083940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.090324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.090351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.090363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.096692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.096720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.096732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.102716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.102743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.102756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.108879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.108906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.108918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.114849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.114877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.114889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.120690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.120716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.120729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.126691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.126719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.126731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.132967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.132994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.139316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.139343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.139355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.145516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.145543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.145555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.151804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.151831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.151844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.158280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.158308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.158320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.164675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.164703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.164715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.170843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.170872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.170884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.177305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.177345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.177366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.184830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.171 [2024-07-10 23:41:52.184859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.171 [2024-07-10 23:41:52.184872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.171 [2024-07-10 23:41:52.191988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780)
00:37:43.171 [2024-07-10 23:41:52.192016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.171 [2024-07-10 23:41:52.192032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:43.171 [2024-07-10 23:41:52.199439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.171 [2024-07-10 23:41:52.199467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.171 [2024-07-10 23:41:52.199479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:43.171 [2024-07-10 23:41:52.206743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.171 [2024-07-10 23:41:52.206772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.171 [2024-07-10 23:41:52.206785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:43.171 [2024-07-10 23:41:52.213900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.171 [2024-07-10 23:41:52.213929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.171 [2024-07-10 23:41:52.213942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:43.171 [2024-07-10 23:41:52.222038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.171 [2024-07-10 23:41:52.222068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.171 [2024-07-10 23:41:52.222081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:43.171 [2024-07-10 23:41:52.231235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.171 [2024-07-10 23:41:52.231264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.171 [2024-07-10 23:41:52.231276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:43.431 [2024-07-10 23:41:52.240181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.431 [2024-07-10 23:41:52.240211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.431 [2024-07-10 23:41:52.240225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:43.431
00:37:43.431 Latency(us)
00:37:43.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:43.431 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:43.431 nvme0n1 : 2.00 4229.56 528.69 0.00 0.00 3778.77 733.72 10656.72
00:37:43.431 ===================================================================================================================
00:37:43.431 Total : 4229.56 528.69 0.00 0.00 3778.77 733.72 10656.72
00:37:43.431 0
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:43.431 | .driver_specific
00:37:43.431 | .nvme_error
00:37:43.431 | .status_code
00:37:43.431 | .command_transient_transport_error'
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 273 > 0 ))
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2659977
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2659977 ']'
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2659977
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2659977
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2659977'
killing process with pid 2659977
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2659977
Received shutdown signal, test time was about 2.000000 seconds
00:37:43.431
00:37:43.431 Latency(us)
00:37:43.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:43.431 ===================================================================================================================
00:37:43.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:43.431 23:41:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2659977
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2660775
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2660775 /var/tmp/bperf.sock
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2660775 ']'
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:44.810 23:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:44.810 [2024-07-10 23:41:53.607606] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:37:44.810 [2024-07-10 23:41:53.607700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660775 ]
00:37:44.810 EAL: No free 2048 kB hugepages reported on node 1
00:37:44.810 [2024-07-10 23:41:53.712066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:45.069 [2024-07-10 23:41:53.931047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:45.328 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:37:45.328 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:37:45.328 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:45.328 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:45.588 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:45.588 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:45.588 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:45.588 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:45.588 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:45.588 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:45.847 nvme0n1
00:37:45.847 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:45.847 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:45.847 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:45.847 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:45.847 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:45.847 23:41:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:45.847 Running I/O for 2 seconds...
00:37:45.847 [2024-07-10 23:41:54.906564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed0b0
00:37:45.847 [2024-07-10 23:41:54.907512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:45.847 [2024-07-10 23:41:54.907549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:37:46.106 [2024-07-10 23:41:54.917887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8
00:37:46.106 [2024-07-10 23:41:54.919020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:46.106 [2024-07-10 23:41:54.919052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:37:46.106 [2024-07-10 23:41:54.928600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208
00:37:46.106 [2024-07-10 23:41:54.929727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:46.106 [2024-07-10 23:41:54.929754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:37:46.106 [2024-07-10 23:41:54.940551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128
00:37:46.106 [2024-07-10 23:41:54.942132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:46.106 [2024-07-10 23:41:54.942164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:37:46.106 [2024-07-10 23:41:54.951428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0
00:37:46.106 [2024-07-10 23:41:54.953091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:46.106 [2024-07-10 23:41:54.953118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:37:46.106 [2024-07-10 23:41:54.960287] tcp.c:2067:data_crc32_calc_done:
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:37:46.106 [2024-07-10 23:41:54.961049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:54.961074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:54.972251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:37:46.106 [2024-07-10 23:41:54.973790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:54.973816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:54.981229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:37:46.106 [2024-07-10 23:41:54.982221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:54.982248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:54.991651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:37:46.106 [2024-07-10 23:41:54.992624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:54.992650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.002218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:37:46.106 [2024-07-10 23:41:55.003192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.003218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.012770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:37:46.106 [2024-07-10 23:41:55.013749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.013774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.024792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed0b0 00:37:46.106 [2024-07-10 23:41:55.026240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.026265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:46.106 
[2024-07-10 23:41:55.035859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:37:46.106 [2024-07-10 23:41:55.037443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.037470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.046961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:37:46.106 [2024-07-10 23:41:55.048774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.048800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.054454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:37:46.106 [2024-07-10 23:41:55.055265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.055290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.064980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:37:46.106 [2024-07-10 23:41:55.065932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.065956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.076081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:37:46.106 [2024-07-10 23:41:55.077100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.077125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.087183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:37:46.106 [2024-07-10 23:41:55.088423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.088448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.097970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1f80 00:37:46.106 [2024-07-10 23:41:55.099191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.099216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.108885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:37:46.106 [2024-07-10 23:41:55.110112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.110137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:46.106 [2024-07-10 23:41:55.119418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:37:46.106 [2024-07-10 23:41:55.120768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.106 [2024-07-10 23:41:55.120796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:46.107 [2024-07-10 23:41:55.130231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:37:46.107 [2024-07-10 23:41:55.131601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.107 [2024-07-10 23:41:55.131627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:46.107 [2024-07-10 23:41:55.141104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:37:46.107 [2024-07-10 23:41:55.142481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.107 [2024-07-10 23:41:55.142507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:46.107 [2024-07-10 23:41:55.149767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:37:46.107 [2024-07-10 23:41:55.150596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.107 [2024-07-10 23:41:55.150621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.107 [2024-07-10 23:41:55.160342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:37:46.107 [2024-07-10 23:41:55.161107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.107 [2024-07-10 23:41:55.161135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.107 [2024-07-10 23:41:55.171200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:37:46.107 [2024-07-10 23:41:55.171958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.107 [2024-07-10 23:41:55.171984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.182038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:37:46.366 [2024-07-10 23:41:55.182932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.182957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.192713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:37:46.366 [2024-07-10 23:41:55.193592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.193618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.203340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:37:46.366 [2024-07-10 23:41:55.204164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.204206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.213976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:37:46.366 [2024-07-10 23:41:55.214826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.214852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.224532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:37:46.366 [2024-07-10 23:41:55.225419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.225443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.235126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:37:46.366 [2024-07-10 23:41:55.235942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.235967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.245726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:37:46.366 [2024-07-10 23:41:55.246596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:46.366 [2024-07-10 23:41:55.246621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.256618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60 00:37:46.366 [2024-07-10 23:41:55.257392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.257418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.266735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:37:46.366 [2024-07-10 23:41:55.267476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.267501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.277988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:37:46.366 [2024-07-10 23:41:55.278869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.278894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.289072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:37:46.366 [2024-07-10 23:41:55.290097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.290122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.300285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:37:46.366 [2024-07-10 23:41:55.301410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.301438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.311400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:37:46.366 [2024-07-10 23:41:55.312749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.312774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.322452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:37:46.366 [2024-07-10 23:41:55.323948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.323973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.331910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:37:46.366 [2024-07-10 23:41:55.332911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.332937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.343933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:37:46.366 [2024-07-10 23:41:55.345415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.345439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.353873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:37:46.366 [2024-07-10 23:41:55.354958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.354983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.364345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:37:46.366 [2024-07-10 23:41:55.365476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.365501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.374198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:37:46.366 [2024-07-10 23:41:55.375269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.375294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.386005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:37:46.366 [2024-07-10 23:41:55.387146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.366 [2024-07-10 23:41:55.387176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:46.366 [2024-07-10 23:41:55.396943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:37:46.367 [2024-07-10 23:41:55.398381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:46.367 [... repeated per-error output elided (host timestamps 23:41:55.398 through 23:41:56.888): each injected digest error emits the same three records — tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195xxxxx, nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 ..., and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 ... — differing only in cid, lba, pdu, and sqhd values; the iostat check below counts 187 transient transport errors for the run ...]
(00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:37:47.923 [2024-07-10 23:41:56.866542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818
00:37:47.923 [2024-07-10 23:41:56.867370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:47.923 [2024-07-10 23:41:56.867395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:37:47.923 [2024-07-10 23:41:56.877171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048
00:37:47.923 [2024-07-10 23:41:56.878020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:47.923 [2024-07-10 23:41:56.878045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:37:47.923 [2024-07-10 23:41:56.887811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0
00:37:47.923 [2024-07-10 23:41:56.888641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:47.923 [2024-07-10 23:41:56.888665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:37:47.923
00:37:47.923 Latency(us)
00:37:47.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:47.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:47.923 nvme0n1 : 2.00 23887.80 93.31 0.00 0.00 5350.45 2208.28 15386.71
00:37:47.923 ===================================================================================================================
00:37:47.923 Total : 23887.80 93.31 0.00 0.00 5350.45 2208.28 15386.71
00:37:47.923 0
00:37:47.923 23:41:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:47.923 23:41:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:47.923 23:41:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:47.923 | .driver_specific
00:37:47.923 | .nvme_error
00:37:47.923 | .status_code
00:37:47.923 | .command_transient_transport_error'
00:37:47.923 23:41:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 187 > 0 ))
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2660775
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2660775 ']'
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2660775
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2660775
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2660775'
00:37:48.182 killing process with pid 2660775
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2660775
00:37:48.182 Received shutdown signal, test time was about 2.000000 seconds
00:37:48.182
00:37:48.182 Latency(us)
00:37:48.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:48.182 ===================================================================================================================
00:37:48.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:48.182 23:41:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2660775
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2661597
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2661597 /var/tmp/bperf.sock
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2661597 ']'
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:49.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:49.560 23:41:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:49.560 [2024-07-10 23:41:58.268018] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:37:49.560 [2024-07-10 23:41:58.268113] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661597 ]
00:37:49.560 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:49.560 Zero copy mechanism will not be used.
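The get_transient_errcount check traced above reduces to a single RPC against the bdevperf instance plus a jq filter over the returned iostat JSON. A minimal standalone sketch, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev name used in this run:

  #!/usr/bin/env bash
  # Pull per-bdev NVMe error statistics (collected because bdev_nvme_set_options
  # was given --nvme-error-stat) and extract the count of completions that ended
  # in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Mirrors the (( 187 > 0 )) check above: the test passes as long as at least
  # one injected digest error was observed and counted.
  (( errcount > 0 ))

For this 4096-byte randwrite run the counter came to 187 against roughly 47,776 completed I/Os (23887.80 IOPS over 2.00 s), and the table's 93.31 MiB/s is consistent with 23887.80 IOPS at 4 KiB per I/O.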
00:37:49.560 EAL: No free 2048 kB hugepages reported on node 1
00:37:49.560 [2024-07-10 23:41:58.372805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:49.560 [2024-07-10 23:41:58.592464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:50.128 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:37:50.128 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:37:50.128 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:50.128 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:50.386 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:50.386 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.386 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:50.386 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.386 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:50.386 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:50.646 nvme0n1
00:37:50.646 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:50.646 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.646 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:50.646 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.646 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:50.646 23:41:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:50.646 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:50.646 Zero copy mechanism will not be used.
00:37:50.646 Running I/O for 2 seconds...
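The xtrace above is the core of the digest-error scenario: bdevperf is told to tally NVMe error statuses and retry failed I/O instead of failing the bdev, crc32c corruption is injected via the accel error module, and the controller is attached with the TCP data digest enabled so the bad CRC is caught in data_crc32_calc_done() and completed as a transient transport error. A condensed sketch of that sequence, using only the commands traced here; one assumption is that rpc_cmd (no -s flag) goes to the target application's default RPC socket rather than the bdevperf one:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf=/var/tmp/bperf.sock

  # bdevperf side: count NVMe error statuses per bdev; -1 = retry failed I/O
  # indefinitely instead of tearing the bdev down.
  "$rpc" -s "$bperf" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any injection left over from the previous run (no -s: assumed to hit
  # the target's default socket, as rpc_cmd does in this suite).
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # Attach with the TCP data digest (--ddgst) enabled so corrupted CRCs are
  # detected and surface as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  "$rpc" -s "$bperf" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Start corrupting crc32c results; -t corrupt -i 32 taken verbatim from the trace.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Drive the randwrite workload bdevperf was started with (-w randwrite -o 131072 -q 16 -t 2).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$bperf" perform_tests

Because --bdev-retry-count -1 retries every such failure rather than propagating it, the signal shows up in the iostat error counters, not the job's Fail/s column, which is consistent with the earlier table reading 0.00 Fail/s while the transient-error counter advanced.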
00:37:50.646 [2024-07-10 23:41:59.612318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.612795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.612834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.620680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.621143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.621180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.627873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.628301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.628329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.633815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.634266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.634294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.640305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.640726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.640752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.646023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.646466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.646492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.651664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.652087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.652114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.656978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.657425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.657450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.662512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.662961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.662987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.668472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.646 [2024-07-10 23:41:59.668893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.646 [2024-07-10 23:41:59.668919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.646 [2024-07-10 23:41:59.673953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.647 [2024-07-10 23:41:59.674396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.647 [2024-07-10 23:41:59.674432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.647 [2024-07-10 23:41:59.679877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.647 [2024-07-10 23:41:59.680301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.647 [2024-07-10 23:41:59.680327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.647 [2024-07-10 23:41:59.685701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.647 [2024-07-10 23:41:59.686114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.647 [2024-07-10 23:41:59.686138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.647 [2024-07-10 23:41:59.692120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.647 [2024-07-10 23:41:59.692597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.647 [2024-07-10 
23:41:59.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.647 [2024-07-10 23:41:59.699865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.647 [2024-07-10 23:41:59.700331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.647 [2024-07-10 23:41:59.700363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.647 [2024-07-10 23:41:59.706763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.647 [2024-07-10 23:41:59.707216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.647 [2024-07-10 23:41:59.707241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.713126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.713591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.713618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.719716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.720130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.720156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.725534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.725987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.726013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.731194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.731620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.731646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.737452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.737902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.744171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.744603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.744628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.750645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.751059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.751084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.757066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.757522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.757547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.762699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.763112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.763139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.768584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.769011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.769037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.774225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.774679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.774704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.780312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.780770] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.780797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.785877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.786325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.786350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.792967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.793467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.800513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.800977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.801003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.806786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.807220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.807249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.813184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.813614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.813639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.819468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.819899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.819925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.826298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.826727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.826752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.833981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.834439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.834465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.840961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.841061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.841086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.848591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.849025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.849050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.856133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.856579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.856604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.863957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.864432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.864458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.872365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.872836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.872862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 
23:41:59.879715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.880175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.907 [2024-07-10 23:41:59.880202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.907 [2024-07-10 23:41:59.887135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.907 [2024-07-10 23:41:59.887240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.887265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.894925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.895360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.895386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.903087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.903191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.903215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.909942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.910363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.910388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.916273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.916704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.916729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.922290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.922737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.922762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.928243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.928653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.928683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.934103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.934557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.934582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.939881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.940307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.940332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.946881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.947049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.947073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.955400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.955858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.962310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.962732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.962758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:50.908 [2024-07-10 23:41:59.968389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:50.908 [2024-07-10 23:41:59.968843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.908 [2024-07-10 23:41:59.968868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:41:59.974674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:41:59.975106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:41:59.975133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:41:59.980573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:41:59.981019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:41:59.981045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:41:59.986603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:41:59.987037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:41:59.987063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:41:59.993156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:41:59.993626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:41:59.993652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.002716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.003195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.003222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.009868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.010343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.010369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.016531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.016982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.023590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.024046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.024073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.031110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.031536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.031565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.037976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.038421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.038448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.044713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.045141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.045178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.050787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.051225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.051251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.056964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.057435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.057460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.063363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.063776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.063802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.069555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.069990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.070016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.075653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.076080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.076106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.081590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.082040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.082066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.087604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.088028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.088055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.095304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.095725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.104275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.168 [2024-07-10 23:42:00.104726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.168 [2024-07-10 23:42:00.104752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.168 [2024-07-10 23:42:00.113608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90
00:37:51.168 [2024-07-10 23:42:00.114042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:51.168 [2024-07-10 23:42:00.114068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:51.168 [2024-07-10 23:42:00.123284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:51.168 [2024-07-10 23:42:00.123734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:51.168 [2024-07-10 23:42:00.123760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:51.168 [2024-07-10 23:42:00.132601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:51.168 [2024-07-10 23:42:00.133039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:51.168 [2024-07-10 23:42:00.133065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line record (tcp.c:2067:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x618000006080) / nvme_io_qpair_print_command *NOTICE* WRITE / spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every subsequent WRITE, with lba varying and sqhd cycling 0001/0021/0041/0061, from [2024-07-10 23:42:00.140104] through [2024-07-10 23:42:00.989598] ...]
00:37:51.953 [2024-07-10 23:42:00.997303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:51.953 [2024-07-10 23:42:00.997632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:51.953 [2024-07-10 23:42:00.997657] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:51.953 [2024-07-10 23:42:01.005097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.953 [2024-07-10 23:42:01.005366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.953 [2024-07-10 23:42:01.005395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:51.953 [2024-07-10 23:42:01.013400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:51.953 [2024-07-10 23:42:01.013766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.953 [2024-07-10 23:42:01.013792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.021424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.021782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.021808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.029753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.030100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.030126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.038286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.038625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.038650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.045894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.046265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.046291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.053577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.053931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.053957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.061921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.062358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.062383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.069736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.069986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.070011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.077778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.078127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.078151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.086039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.086414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.086440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.093926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.094327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.094352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.213 [2024-07-10 23:42:01.101570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.213 [2024-07-10 23:42:01.101906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.213 [2024-07-10 23:42:01.101931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.109051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.109424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.109450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.117344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.117690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.117715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.125217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.125594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.125620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.133529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.133881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.133906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.141666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.142057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.142086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.148981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.149395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.156248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.156617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.156642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.163907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 
[2024-07-10 23:42:01.164222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.164247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.169608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.169903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.169929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.174541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.174826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.174852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.179799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.180104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.180130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.184822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.185128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.185153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.190022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.190315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.190340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.196131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.196390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.196415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.201348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.201652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.201677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.208600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.208970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.208995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.213991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.214306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.214331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.219076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.219339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.219365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.223906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.224131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.224157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.228937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.229245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.229271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.234729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.235007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.235031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 
[2024-07-10 23:42:01.240213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.240495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.240524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.245105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.245375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.245400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.250018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.250283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.250308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.254842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.255103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.255128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.259547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.259758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.259783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.264213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.264509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.264534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.269664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.269966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.214 [2024-07-10 23:42:01.269991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.214 [2024-07-10 23:42:01.274743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.214 [2024-07-10 23:42:01.275000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.215 [2024-07-10 23:42:01.275025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.215 [2024-07-10 23:42:01.279419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.279654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.279681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.283978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.284206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.284232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.289117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.289405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.289430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.294779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.295020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.295045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.300310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.300589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.300614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.306154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.306450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 
23:42:01.306475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.312090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.312314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.312339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.316922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.317175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.317201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.321649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.321912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.321937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.474 [2024-07-10 23:42:01.326767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.474 [2024-07-10 23:42:01.327038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.474 [2024-07-10 23:42:01.327067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.331642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.331893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.331918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.336480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.336697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.336722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.342113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.342333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.342357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.347890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.348173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.348198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.355246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.355485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.355510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.362627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.362934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.362960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.370176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.370384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.370409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.378309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.378646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.378671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.386150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.386474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.386499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.394511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.394807] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.394833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.402373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.402664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.402690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.410435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.410657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.410682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.418998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.419385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.419410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.426224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.426438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.426463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.432409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.432661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.432687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.438268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.438544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.438569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.444103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.444313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.444339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.449999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.450208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.450233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.456041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.456251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.456276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.460702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.460936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.460961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.465428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.465650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.465675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.470133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.470371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.470396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.474741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.474944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.474970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 
23:42:01.479354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.479575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.479600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.483999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.484225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.484250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.488619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.488829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.488854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.493216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.493440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.493464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.497905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.498111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.498135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.502644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.502852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.502876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.507248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.507464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.507489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.511853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.512085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.512110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.516492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.516713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.516737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.521178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.521412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.521437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.525789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.526021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.526046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.530343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.530577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.534949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.535178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.535203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.475 [2024-07-10 23:42:01.539593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.475 [2024-07-10 23:42:01.539813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.475 [2024-07-10 23:42:01.539839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.544129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.544359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.734 [2024-07-10 23:42:01.544384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.548822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.549046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.734 [2024-07-10 23:42:01.549071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.553424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.553628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.734 [2024-07-10 23:42:01.553653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.558449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.558652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.734 [2024-07-10 23:42:01.558676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.564026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.564233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.734 [2024-07-10 23:42:01.564258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.568715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.568934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.734 [2024-07-10 23:42:01.568963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:52.734 [2024-07-10 23:42:01.573403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:52.734 [2024-07-10 23:42:01.573631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.734 [2024-07-10 23:42:01.573656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:52.734 [2024-07-10 23:42:01.578032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:52.734 [2024-07-10 23:42:01.578261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.734 [2024-07-10 23:42:01.578286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:52.734 [2024-07-10 23:42:01.582551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:52.734 [2024-07-10 23:42:01.582780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.734 [2024-07-10 23:42:01.582806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:52.734 [2024-07-10 23:42:01.587207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:52.734 [2024-07-10 23:42:01.587427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.734 [2024-07-10 23:42:01.587452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:52.734 [2024-07-10 23:42:01.591872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:52.734 [2024-07-10 23:42:01.592082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.734 [2024-07-10 23:42:01.592107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:52.734 [2024-07-10 23:42:01.596484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:52.735 [2024-07-10 23:42:01.596704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.735 [2024-07-10 23:42:01.596729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:52.735 [2024-07-10 23:42:01.601065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:37:52.735 [2024-07-10 23:42:01.601285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:52.735 [2024-07-10 23:42:01.601310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
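Each failed I/O in the stream above is logged twice: tcp.c:2067 (data_crc32_calc_done) is the TCP transport reporting that the CRC32C data digest computed over a received PDU does not match its payload, and nvme_qpair.c then prints the affected WRITE command together with its TRANSIENT TRANSPORT ERROR (00/22) completion status. A minimal offline tally of the two message types, assuming the console output above has been saved to a hypothetical bperf.log:

    # count digest failures detected by the TCP transport layer
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
    # count the transient-transport-error completions they turn into
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log

The two counts should track each other, since every digest mismatch is completed back up the stack with that status code.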
00:37:52.735 nvme0n1 : 2.00 4980.77 622.60 0.00 0.00 3207.10 2065.81 11625.52 00:37:52.735 =================================================================================================================== 00:37:52.735 Total : 4980.77 622.60 0.00 0.00 3207.10 2065.81 11625.52 00:37:52.735 0 00:37:52.735 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:52.735 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:52.735 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:52.735 | .driver_specific 00:37:52.735 | .nvme_error 00:37:52.735 | .status_code 00:37:52.735 | .command_transient_transport_error' 00:37:52.735 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 321 > 0 )) 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2661597 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2661597 ']' 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2661597 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2661597 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2661597' 00:37:52.993 killing process with pid 2661597 00:37:52.993 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2661597 00:37:52.993 Received shutdown signal, test time was about 2.000000 seconds 00:37:52.993 00:37:52.993 Latency(us) 00:37:52.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.994 =================================================================================================================== 00:37:52.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:52.994 23:42:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2661597 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2659019 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2659019 ']' 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2659019 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2659019 00:37:53.929 23:42:02 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2659019' 00:37:53.929 killing process with pid 2659019 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2659019 00:37:53.929 23:42:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2659019 00:37:55.305 00:37:55.305 real 0m21.800s 00:37:55.305 user 0m40.553s 00:37:55.305 sys 0m4.672s 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:55.305 ************************************ 00:37:55.305 END TEST nvmf_digest_error 00:37:55.305 ************************************ 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:55.305 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:55.305 rmmod nvme_tcp 00:37:55.305 rmmod nvme_fabrics 00:37:55.305 rmmod nvme_keyring 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2659019 ']' 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2659019 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2659019 ']' 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2659019 00:37:55.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2659019) - No such process 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2659019 is not found' 00:37:55.565 Process with pid 2659019 is not found 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:37:55.565 23:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.469 23:42:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:57.469 00:37:57.469 real 0m52.685s 00:37:57.469 user 1m25.186s 00:37:57.469 sys 0m13.591s 00:37:57.469 23:42:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:57.469 23:42:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.469 ************************************ 00:37:57.469 END TEST nvmf_digest 00:37:57.469 ************************************ 00:37:57.469 23:42:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:57.469 23:42:06 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:37:57.469 23:42:06 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:37:57.469 23:42:06 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:37:57.469 23:42:06 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:57.469 23:42:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:57.469 23:42:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:57.469 23:42:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:57.469 ************************************ 00:37:57.469 START TEST nvmf_bdevperf 00:37:57.469 ************************************ 00:37:57.469 23:42:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:57.731 * Looking for test storage... 00:37:57.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
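A note for readers following the trace: the "-i" value appended to NVMF_APP here is the shared-memory instance ID that reappears in the nvmf_tgt command line further down, and "-e 0xFFFF" is the tracepoint group mask the target later acknowledges with "Tracepoint Group Mask 0xFFFF specified". A minimal sketch of how that mask is consumed at runtime, assuming the nvmf_tgt instance from this log is still up (the command is the one the target itself suggests in its startup notice):

    # snapshot the tracepoints enabled by -e 0xFFFF; -i 0 matches the
    # shm instance ID passed through NVMF_APP above
    spdk_trace -s nvmf -i 0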
00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:37:57.731 23:42:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:03.106 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:03.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:03.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:03.107 Found net devices under 0000:86:00.0: cvl_0_0 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:03.107 Found net devices under 0000:86:00.1: cvl_0_1 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:38:03.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:03.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms
00:38:03.107
00:38:03.107 --- 10.0.0.2 ping statistics ---
00:38:03.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:03.107 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:03.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:03.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:38:03.107
00:38:03.107 --- 10.0.0.1 ping statistics ---
00:38:03.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:03.107 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:38:03.107 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2666346
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2666346
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2666346 ']'
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:03.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:03.108 23:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.108 [2024-07-10 23:42:12.013240] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:38:03.108 [2024-07-10 23:42:12.013326] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:03.108 EAL: No free 2048 kB hugepages reported on node 1 00:38:03.108 [2024-07-10 23:42:12.122420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:03.366 [2024-07-10 23:42:12.338088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:03.367 [2024-07-10 23:42:12.338133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:03.367 [2024-07-10 23:42:12.338148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:03.367 [2024-07-10 23:42:12.338157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:03.367 [2024-07-10 23:42:12.338171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:03.367 [2024-07-10 23:42:12.338243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:03.367 [2024-07-10 23:42:12.338488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.367 [2024-07-10 23:42:12.338497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.935 [2024-07-10 23:42:12.837329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.935 Malloc0 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.935 [2024-07-10 23:42:12.971129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:03.935 { 00:38:03.935 "params": { 00:38:03.935 "name": "Nvme$subsystem", 00:38:03.935 "trtype": "$TEST_TRANSPORT", 00:38:03.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:03.935 "adrfam": "ipv4", 00:38:03.935 "trsvcid": "$NVMF_PORT", 00:38:03.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:03.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:03.935 "hdgst": ${hdgst:-false}, 00:38:03.935 "ddgst": ${ddgst:-false} 00:38:03.935 }, 00:38:03.935 "method": "bdev_nvme_attach_controller" 00:38:03.935 } 00:38:03.935 EOF 00:38:03.935 )") 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:03.935 23:42:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:03.935 "params": { 00:38:03.935 "name": "Nvme1", 00:38:03.935 "trtype": "tcp", 00:38:03.935 "traddr": "10.0.0.2", 00:38:03.935 "adrfam": "ipv4", 00:38:03.935 "trsvcid": "4420", 00:38:03.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:03.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:03.935 "hdgst": false, 00:38:03.935 "ddgst": false 00:38:03.935 }, 00:38:03.935 "method": "bdev_nvme_attach_controller" 00:38:03.935 }' 00:38:04.194 [2024-07-10 23:42:13.033992] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
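Before the bdevperf runs below, it may help to see the target bring-up that rpc_cmd just performed written out as plain rpc.py calls. This is a sketch only, assuming the nvmf_tgt from this log is up and serving the default /var/tmp/spdk.sock; every value is copied from the trace above:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # transport, backing bdev, subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420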
00:38:04.194 [2024-07-10 23:42:13.034082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666595 ]
00:38:04.194 EAL: No free 2048 kB hugepages reported on node 1
00:38:04.194 [2024-07-10 23:42:13.133408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:04.452 [2024-07-10 23:42:13.368454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:38:05.016 Running I/O for 1 seconds...
00:38:05.952
00:38:05.952                                                 Latency(us)
00:38:05.952 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:38:05.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:05.952 Verification LBA range: start 0x0 length 0x4000
00:38:05.952 Nvme1n1             :       1.01    9435.47      36.86      0.00     0.00   13491.96    2735.42   12252.38
00:38:05.952 ===================================================================================================================
00:38:05.952 Total               :                9435.47      36.86      0.00     0.00   13491.96    2735.42   12252.38
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2667057
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:38:06.889 {
00:38:06.889 "params": {
00:38:06.889 "name": "Nvme$subsystem",
00:38:06.889 "trtype": "$TEST_TRANSPORT",
00:38:06.889 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:06.889 "adrfam": "ipv4",
00:38:06.889 "trsvcid": "$NVMF_PORT",
00:38:06.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:06.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:06.889 "hdgst": ${hdgst:-false},
00:38:06.889 "ddgst": ${ddgst:-false}
00:38:06.889 },
00:38:06.889 "method": "bdev_nvme_attach_controller"
00:38:06.889 }
00:38:06.889 EOF
00:38:06.889 )")
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:38:06.889 23:42:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:38:06.889 "params": {
00:38:06.889 "name": "Nvme1",
00:38:06.889 "trtype": "tcp",
00:38:06.889 "traddr": "10.0.0.2",
00:38:06.889 "adrfam": "ipv4",
00:38:06.889 "trsvcid": "4420",
00:38:06.889 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:06.889 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:06.889 "hdgst": false,
00:38:06.889 "ddgst": false
00:38:06.889 },
00:38:06.889 "method": "bdev_nvme_attach_controller"
00:38:06.889 }'
00:38:07.147 [2024-07-10 23:42:15.986752] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
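The --json blob generated above is the declarative form of a single bdev_nvme_attach_controller call. A rough imperative equivalent is sketched below; the socket path is hypothetical (this log feeds bdevperf through /dev/fd/63 rather than an RPC socket) and the flag names are rpc.py's usual short options, so treat both as assumptions:

    # -b/-t/-a/-f/-s/-n/-q mirror name/trtype/traddr/adrfam/trsvcid/subnqn/hostnqn
    # from the JSON params above; /var/tmp/bdevperf.sock is a hypothetical socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Either way the attached controller exposes bdev Nvme1n1, which is the job name the result table above reports.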
00:38:07.147 [2024-07-10 23:42:15.986842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667057 ] 00:38:07.147 EAL: No free 2048 kB hugepages reported on node 1 00:38:07.147 [2024-07-10 23:42:16.090747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.404 [2024-07-10 23:42:16.325720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.971 Running I/O for 15 seconds... 00:38:09.872 23:42:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2666346 00:38:09.872 23:42:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:10.134 [2024-07-10 23:42:18.942294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 
[2024-07-10 23:42:18.942793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:10.134 [2024-07-10 23:42:18.942947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.942988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.942999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:10.134 [2024-07-10 23:42:18.943201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:10.134 [2024-07-10 23:42:18.943212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:10.134 [2024-07-10 23:42:18.943221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining aborted-command pairs elided: every queued WRITE (lba 19704-20024) and READ (lba 19016-19376) on qid:1 is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) ...]
00:38:10.137 [2024-07-10 23:42:18.945043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:10.137 [2024-07-10 23:42:18.945052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:10.137 [2024-07-10 23:42:18.945062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set
00:38:10.137 [2024-07-10 23:42:18.945074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:10.137 [2024-07-10 23:42:18.945084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:10.137 [2024-07-10 23:42:18.945093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:8 PRP1 0x0 PRP2 0x0
00:38:10.137 [2024-07-10 23:42:18.945103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:10.137 [2024-07-10 23:42:18.945400] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032da00 was disconnected and freed. reset controller.
00:38:10.137 [2024-07-10 23:42:18.945463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:10.137 [2024-07-10 23:42:18.945480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:10.137 [2024-07-10 23:42:18.945492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:10.137 [2024-07-10 23:42:18.945502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:10.137 [2024-07-10 23:42:18.945512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:10.137 [2024-07-10 23:42:18.945523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:10.137 [2024-07-10 23:42:18.945533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:38:10.137 [2024-07-10 23:42:18.945542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:10.137 [2024-07-10 23:42:18.945551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:10.137 [2024-07-10 23:42:18.948667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:10.137 [2024-07-10 23:42:18.948717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:10.137 [2024-07-10 23:42:18.949434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.137 [2024-07-10 23:42:18.949465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:10.137 [2024-07-10 23:42:18.949477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:10.137 [2024-07-10 23:42:18.949683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:10.137 [2024-07-10 23:42:18.949884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:10.137 [2024-07-10 23:42:18.949896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:10.137 [2024-07-10 23:42:18.949905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:10.137 [2024-07-10 23:42:18.953021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
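Note on the stretch above: this is the host-side signature of a dropped NVMe/TCP connection. bdev_nvme frees the disconnected qpair, every queued I/O and admin command is completed as ABORTED - SQ DELETION, and the driver begins reconnect attempts, each failing with connect() errno = 111, which is ECONNREFUSED: nothing is accepting connections on 10.0.0.2 port 4420 at that moment. A minimal sketch for probing the listener from the test node while such a log is streaming, assuming a netcat build that supports -z/-w (the address and port are taken from the log records above):

  nc -z -w 1 10.0.0.2 4420 || echo 'target not listening on 4420 yet (connection refused)'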
00:38:10.137 [2024-07-10 23:42:18.962256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:10.137 [2024-07-10 23:42:18.962782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.137 [2024-07-10 23:42:18.962803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:10.137 [2024-07-10 23:42:18.962814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:10.137 [2024-07-10 23:42:18.963007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:10.137 [2024-07-10 23:42:18.963222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:10.137 [2024-07-10 23:42:18.963234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:10.137 [2024-07-10 23:42:18.963243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:10.137 [2024-07-10 23:42:18.966219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:10.137 [2024-07-10 23:42:18.975452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:10.137 [2024-07-10 23:42:18.975940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.137 [2024-07-10 23:42:18.975961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:10.137 [2024-07-10 23:42:18.975970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:10.137 [2024-07-10 23:42:18.976155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:10.137 [2024-07-10 23:42:18.976371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:10.137 [2024-07-10 23:42:18.976385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:10.137 [2024-07-10 23:42:18.976394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:10.137 [2024-07-10 23:42:18.979333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 23 further identical reset attempts elided (23:42:18.988650 through 23:42:19.284161): each repeats the same nvme_ctrlr_disconnect "resetting controller" notice, connect() failed errno = 111 to addr=10.0.0.2 port=4420, "Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor", "Ctrlr is in error state", "controller reinitialization failed", "in failed state.", and "Resetting controller failed." sequence ...]
00:38:10.399 [2024-07-10 23:42:19.293419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:10.399 [2024-07-10 23:42:19.293894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.399 [2024-07-10 23:42:19.293949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:10.399 [2024-07-10 23:42:19.293978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:10.399 [2024-07-10 23:42:19.294372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:10.399 [2024-07-10 23:42:19.294565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:10.399 [2024-07-10 23:42:19.294575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:10.399 [2024-07-10 23:42:19.294584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:10.399 [2024-07-10 23:42:19.297515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:10.399 [2024-07-10 23:42:19.306587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.399 [2024-07-10 23:42:19.307045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.399 [2024-07-10 23:42:19.307066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.399 [2024-07-10 23:42:19.307075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.399 [2024-07-10 23:42:19.307293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.399 [2024-07-10 23:42:19.307491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.399 [2024-07-10 23:42:19.307503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.399 [2024-07-10 23:42:19.307511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.399 [2024-07-10 23:42:19.310456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.399 [2024-07-10 23:42:19.319654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.399 [2024-07-10 23:42:19.320154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.399 [2024-07-10 23:42:19.320179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.399 [2024-07-10 23:42:19.320189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.399 [2024-07-10 23:42:19.320381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.399 [2024-07-10 23:42:19.320573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.399 [2024-07-10 23:42:19.320590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.399 [2024-07-10 23:42:19.320598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.399 [2024-07-10 23:42:19.323561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.399 [2024-07-10 23:42:19.332860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.399 [2024-07-10 23:42:19.333339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.399 [2024-07-10 23:42:19.333361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.399 [2024-07-10 23:42:19.333373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.399 [2024-07-10 23:42:19.333556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.399 [2024-07-10 23:42:19.333739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.399 [2024-07-10 23:42:19.333750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.399 [2024-07-10 23:42:19.333758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.399 [2024-07-10 23:42:19.336704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.399 [2024-07-10 23:42:19.345937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.346408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.346429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.346439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.346632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.346823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.346834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.346842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.349791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.400 [2024-07-10 23:42:19.359029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.359485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.359505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.359515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.359706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.359897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.359908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.359916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.362858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.400 [2024-07-10 23:42:19.372222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.372707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.372729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.372739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.372931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.373123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.373139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.373147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.376091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.400 [2024-07-10 23:42:19.385483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.385984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.386004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.386014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.386212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.386404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.386416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.386424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.389385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.400 [2024-07-10 23:42:19.398756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.399250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.399272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.399282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.399476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.399667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.399678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.399687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.402624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.400 [2024-07-10 23:42:19.411971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.412406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.412426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.412436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.412628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.412820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.412830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.412839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.415846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.400 [2024-07-10 23:42:19.425118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.425543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.425563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.425573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.425754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.425937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.425948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.425956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.429064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.400 [2024-07-10 23:42:19.438503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.438977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.439042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.439072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.439604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.439802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.439813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.439822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.442921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.400 [2024-07-10 23:42:19.451834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.400 [2024-07-10 23:42:19.452338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.400 [2024-07-10 23:42:19.452360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.400 [2024-07-10 23:42:19.452370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.400 [2024-07-10 23:42:19.452568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.400 [2024-07-10 23:42:19.452766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.400 [2024-07-10 23:42:19.452777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.400 [2024-07-10 23:42:19.452786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.400 [2024-07-10 23:42:19.455894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.400 [2024-07-10 23:42:19.465269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.661 [2024-07-10 23:42:19.465679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.661 [2024-07-10 23:42:19.465701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.661 [2024-07-10 23:42:19.465714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.661 [2024-07-10 23:42:19.465912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.661 [2024-07-10 23:42:19.466109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.661 [2024-07-10 23:42:19.466120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.661 [2024-07-10 23:42:19.466128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.661 [2024-07-10 23:42:19.469218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.661 [2024-07-10 23:42:19.478548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.661 [2024-07-10 23:42:19.479070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.661 [2024-07-10 23:42:19.479125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.661 [2024-07-10 23:42:19.479154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.661 [2024-07-10 23:42:19.479666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.661 [2024-07-10 23:42:19.479863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.661 [2024-07-10 23:42:19.479874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.661 [2024-07-10 23:42:19.479883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.661 [2024-07-10 23:42:19.482929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.661 [2024-07-10 23:42:19.491742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.661 [2024-07-10 23:42:19.492273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.661 [2024-07-10 23:42:19.492294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.661 [2024-07-10 23:42:19.492303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.661 [2024-07-10 23:42:19.492497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.661 [2024-07-10 23:42:19.492678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.661 [2024-07-10 23:42:19.492689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.661 [2024-07-10 23:42:19.492696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.661 [2024-07-10 23:42:19.495640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.661 [2024-07-10 23:42:19.505059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.661 [2024-07-10 23:42:19.505498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.505519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.505529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.505720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.505915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.505925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.505934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.508899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.662 [2024-07-10 23:42:19.518279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.518616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.518636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.518646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.518838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.519031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.519042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.519050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.522047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.662 [2024-07-10 23:42:19.531477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.531955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.532010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.532041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.532651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.532843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.532855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.532863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.535858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.662 [2024-07-10 23:42:19.544734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.545158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.545182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.545192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.545384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.545576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.545587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.545595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.548558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.662 [2024-07-10 23:42:19.558005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.558445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.558467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.558476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.558674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.558871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.558882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.558890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.561867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.662 [2024-07-10 23:42:19.571220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.571614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.571634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.571643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.571835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.572026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.572037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.572046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.574981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.662 [2024-07-10 23:42:19.584526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.585042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.585098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.585128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.585782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.586031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.586043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.586052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.589003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.662 [2024-07-10 23:42:19.597649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.598137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.598206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.598244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.598719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.599000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.599015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.599027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.603485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.662 [2024-07-10 23:42:19.611154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.611581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.611602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.611611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.611804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.611996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.612007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.612015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.614996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.662 [2024-07-10 23:42:19.624450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.624783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.624803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.624811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.624993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.625180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.625208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.625217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.628221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.662 [2024-07-10 23:42:19.637751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.638257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.662 [2024-07-10 23:42:19.638312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.662 [2024-07-10 23:42:19.638342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.662 [2024-07-10 23:42:19.638965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.662 [2024-07-10 23:42:19.639259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.662 [2024-07-10 23:42:19.639275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.662 [2024-07-10 23:42:19.639287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.662 [2024-07-10 23:42:19.643738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.662 [2024-07-10 23:42:19.651377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.662 [2024-07-10 23:42:19.651880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.663 [2024-07-10 23:42:19.651899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.663 [2024-07-10 23:42:19.651909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.663 [2024-07-10 23:42:19.652100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.663 [2024-07-10 23:42:19.652299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.663 [2024-07-10 23:42:19.652311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.663 [2024-07-10 23:42:19.652319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.663 [2024-07-10 23:42:19.655347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.663 [2024-07-10 23:42:19.664602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.663 [2024-07-10 23:42:19.665076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.663 [2024-07-10 23:42:19.665141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.663 [2024-07-10 23:42:19.665184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.663 [2024-07-10 23:42:19.665826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.663 [2024-07-10 23:42:19.666343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.663 [2024-07-10 23:42:19.666354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.663 [2024-07-10 23:42:19.666363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.663 [2024-07-10 23:42:19.669294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.663 [2024-07-10 23:42:19.677877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.663 [2024-07-10 23:42:19.678346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.663 [2024-07-10 23:42:19.678403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.663 [2024-07-10 23:42:19.678434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.663 [2024-07-10 23:42:19.679038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.663 [2024-07-10 23:42:19.679325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.663 [2024-07-10 23:42:19.679342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.663 [2024-07-10 23:42:19.679353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.663 [2024-07-10 23:42:19.683811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.663 [2024-07-10 23:42:19.691810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.663 [2024-07-10 23:42:19.692243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.663 [2024-07-10 23:42:19.692264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.663 [2024-07-10 23:42:19.692273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.663 [2024-07-10 23:42:19.692474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.663 [2024-07-10 23:42:19.692660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.663 [2024-07-10 23:42:19.692670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.663 [2024-07-10 23:42:19.692678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.663 [2024-07-10 23:42:19.695674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.663 [2024-07-10 23:42:19.704976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.663 [2024-07-10 23:42:19.705467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.663 [2024-07-10 23:42:19.705487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.663 [2024-07-10 23:42:19.705496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.663 [2024-07-10 23:42:19.705693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.663 [2024-07-10 23:42:19.705891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.663 [2024-07-10 23:42:19.705902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.663 [2024-07-10 23:42:19.705910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.663 [2024-07-10 23:42:19.709017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.663 [2024-07-10 23:42:19.718338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.663 [2024-07-10 23:42:19.718825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.663 [2024-07-10 23:42:19.718845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.663 [2024-07-10 23:42:19.718855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.663 [2024-07-10 23:42:19.719047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.663 [2024-07-10 23:42:19.719244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.663 [2024-07-10 23:42:19.719256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.663 [2024-07-10 23:42:19.719264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.663 [2024-07-10 23:42:19.722321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.924 [2024-07-10 23:42:19.731765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.732335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.732399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.732430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.733073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.733479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.733491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.733499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.736523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.924 [2024-07-10 23:42:19.745044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.745504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.745525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.745535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.745727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.745919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.745930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.745938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.748936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.924 [2024-07-10 23:42:19.758296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.758708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.758728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.758737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.758930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.759122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.759133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.759142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.762083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.924 [2024-07-10 23:42:19.771622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.772171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.772228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.772259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.772902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.773421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.773432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.773440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.776398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.924 [2024-07-10 23:42:19.784855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.785371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.785428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.785459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.786104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.786467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.786478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.786486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.789451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.924 [2024-07-10 23:42:19.798094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.798525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.798547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.798557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.798750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.798941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.798952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.798961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.801896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.924 [2024-07-10 23:42:19.811380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.811796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.811816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.811826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.812017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.812224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.812236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.812245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.924 [2024-07-10 23:42:19.815263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.924 [2024-07-10 23:42:19.824669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.924 [2024-07-10 23:42:19.825148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.924 [2024-07-10 23:42:19.825222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.924 [2024-07-10 23:42:19.825254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.924 [2024-07-10 23:42:19.825773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.924 [2024-07-10 23:42:19.825964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.924 [2024-07-10 23:42:19.825975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.924 [2024-07-10 23:42:19.825984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.828919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.925 [2024-07-10 23:42:19.838144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.838631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.838692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.838722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.839376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.839596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.839606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.839615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.842580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.925 [2024-07-10 23:42:19.851435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.851908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.851929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.851938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.852131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.852327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.852338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.852346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.855356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.925 [2024-07-10 23:42:19.864704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.865183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.865207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.865216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.865416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.865597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.865607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.865614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.868611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.925 [2024-07-10 23:42:19.877879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.878355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.878375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.878385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.878598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.878795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.878806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.878814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.881919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.925 [2024-07-10 23:42:19.890958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.891425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.891481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.891526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.891948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.892140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.892151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.892166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.895093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.925 [2024-07-10 23:42:19.904154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.904607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.904627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.904636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.904818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.905002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.905012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.905020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.907964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.925 [2024-07-10 23:42:19.917376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.917869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.917924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.917954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.918611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.918982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.918993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.919001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.921932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.925 [2024-07-10 23:42:19.930446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.930911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.930967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.930998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.931654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.932075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.932091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.932103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.936560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.925 [2024-07-10 23:42:19.944077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.944558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.944577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.944587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.944778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.944970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.944981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.944992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.948011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.925 [2024-07-10 23:42:19.957145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.957647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.957669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.957678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.957877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.958074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.925 [2024-07-10 23:42:19.958085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.925 [2024-07-10 23:42:19.958098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.925 [2024-07-10 23:42:19.961328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:10.925 [2024-07-10 23:42:19.970464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.925 [2024-07-10 23:42:19.970950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.925 [2024-07-10 23:42:19.970971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.925 [2024-07-10 23:42:19.970980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.925 [2024-07-10 23:42:19.971183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.925 [2024-07-10 23:42:19.971390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.926 [2024-07-10 23:42:19.971401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.926 [2024-07-10 23:42:19.971409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.926 [2024-07-10 23:42:19.974421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:10.926 [2024-07-10 23:42:19.983601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:10.926 [2024-07-10 23:42:19.984074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.926 [2024-07-10 23:42:19.984139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:10.926 [2024-07-10 23:42:19.984184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:10.926 [2024-07-10 23:42:19.984720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:10.926 [2024-07-10 23:42:19.984917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:10.926 [2024-07-10 23:42:19.984928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:10.926 [2024-07-10 23:42:19.984937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:10.926 [2024-07-10 23:42:19.988007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.187 [2024-07-10 23:42:19.996986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.187 [2024-07-10 23:42:19.997451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.187 [2024-07-10 23:42:19.997477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.187 [2024-07-10 23:42:19.997487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.187 [2024-07-10 23:42:19.997679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.187 [2024-07-10 23:42:19.997870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.187 [2024-07-10 23:42:19.997881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.187 [2024-07-10 23:42:19.997890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.187 [2024-07-10 23:42:20.000840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.187 [2024-07-10 23:42:20.010404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.187 [2024-07-10 23:42:20.010875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.187 [2024-07-10 23:42:20.010896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.187 [2024-07-10 23:42:20.010906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.187 [2024-07-10 23:42:20.011104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.187 [2024-07-10 23:42:20.011309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.187 [2024-07-10 23:42:20.011321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.187 [2024-07-10 23:42:20.011329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.187 [2024-07-10 23:42:20.014426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.187 [2024-07-10 23:42:20.024916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.187 [2024-07-10 23:42:20.025441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.187 [2024-07-10 23:42:20.025464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.187 [2024-07-10 23:42:20.025475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.025693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.025914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.025927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.025935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.029017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.188 [2024-07-10 23:42:20.038366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.038778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.038800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.038810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.039006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.039224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.039237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.039245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.042310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.188 [2024-07-10 23:42:20.051612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.052095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.052115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.052125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.052324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.052516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.052527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.052536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.055601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.188 [2024-07-10 23:42:20.064759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.065281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.065338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.065369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.065943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.066135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.066146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.066154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.069089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.188 [2024-07-10 23:42:20.077986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.078385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.078406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.078416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.078609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.078808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.078819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.078831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.081776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.188 [2024-07-10 23:42:20.091112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.091590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.091611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.091621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.091812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.092004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.092015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.092023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.094958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.188 [2024-07-10 23:42:20.104190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.104692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.104748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.104777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.105298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.105491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.105502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.105510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.108438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.188 [2024-07-10 23:42:20.117397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.117855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.117874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.117884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.118064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.118271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.118283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.118291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.121209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.188 [2024-07-10 23:42:20.130488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.130942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.130961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.130970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.131151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.131364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.131375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.131384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.134295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.188 [2024-07-10 23:42:20.143578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.144042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.144098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.144128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.144569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.144762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.144772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.144781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.147712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.188 [2024-07-10 23:42:20.156876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.157321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.188 [2024-07-10 23:42:20.157342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.188 [2024-07-10 23:42:20.157352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.188 [2024-07-10 23:42:20.157550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.188 [2024-07-10 23:42:20.157747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.188 [2024-07-10 23:42:20.157759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.188 [2024-07-10 23:42:20.157769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.188 [2024-07-10 23:42:20.160881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.188 [2024-07-10 23:42:20.170251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.188 [2024-07-10 23:42:20.170753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.170809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.170840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.171361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.171559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.171570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.171579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.189 [2024-07-10 23:42:20.174689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.189 [2024-07-10 23:42:20.183689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.189 [2024-07-10 23:42:20.184201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.184257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.184288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.184934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.185532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.185544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.185553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.189 [2024-07-10 23:42:20.188662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.189 [2024-07-10 23:42:20.197051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.189 [2024-07-10 23:42:20.197527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.197548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.197557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.197750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.197942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.197954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.197964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.189 [2024-07-10 23:42:20.200959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.189 [2024-07-10 23:42:20.210403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.189 [2024-07-10 23:42:20.210904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.210925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.210936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.211134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.211339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.211352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.211364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.189 [2024-07-10 23:42:20.214478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.189 [2024-07-10 23:42:20.223842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.189 [2024-07-10 23:42:20.224353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.224374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.224384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.224577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.224769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.224780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.224789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.189 [2024-07-10 23:42:20.227872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.189 [2024-07-10 23:42:20.237121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.189 [2024-07-10 23:42:20.237604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.237625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.237635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.237833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.238031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.238042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.238051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.189 [2024-07-10 23:42:20.241115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.189 [2024-07-10 23:42:20.250540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.189 [2024-07-10 23:42:20.251013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.189 [2024-07-10 23:42:20.251034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.189 [2024-07-10 23:42:20.251045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.189 [2024-07-10 23:42:20.251250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.189 [2024-07-10 23:42:20.251448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.189 [2024-07-10 23:42:20.251460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.189 [2024-07-10 23:42:20.251469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.450 [2024-07-10 23:42:20.254578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.450 [2024-07-10 23:42:20.263823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.450 [2024-07-10 23:42:20.264311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.450 [2024-07-10 23:42:20.264331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.450 [2024-07-10 23:42:20.264341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.450 [2024-07-10 23:42:20.264533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.450 [2024-07-10 23:42:20.264725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.450 [2024-07-10 23:42:20.264736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.450 [2024-07-10 23:42:20.264751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.450 [2024-07-10 23:42:20.267829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.450 [2024-07-10 23:42:20.277282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.450 [2024-07-10 23:42:20.277738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.450 [2024-07-10 23:42:20.277800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.450 [2024-07-10 23:42:20.277830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.450 [2024-07-10 23:42:20.278465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.450 [2024-07-10 23:42:20.278657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.450 [2024-07-10 23:42:20.278668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.278676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.281741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.451 [2024-07-10 23:42:20.290462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.290987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.291042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.291072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.291568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.291761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.291772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.291781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.294756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.451 [2024-07-10 23:42:20.303619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.304115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.304135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.304145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.304346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.304538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.304548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.304557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.307526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.451 [2024-07-10 23:42:20.316754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.317242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.317297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.317328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.317581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.317762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.317772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.317781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.320724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.451 [2024-07-10 23:42:20.329890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.330383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.330403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.330413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.330596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.330778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.330789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.330797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.333739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.451 [2024-07-10 23:42:20.342987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.343416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.343437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.343447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.343639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.343831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.343842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.343853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.346799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.451 [2024-07-10 23:42:20.356207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.356727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.356784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.356815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.357475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.357954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.357965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.357974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.360853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.451 [2024-07-10 23:42:20.369438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.369945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.369966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.369976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.370175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.370367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.370378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.370387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.373320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.451 [2024-07-10 23:42:20.382529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.382953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.383010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.383040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.383679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.383871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.383882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.383895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.386866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.451 [2024-07-10 23:42:20.395704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.396112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.396133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.396142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.396356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.396548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.396560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.396568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.399501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.451 [2024-07-10 23:42:20.408872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.451 [2024-07-10 23:42:20.409372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.451 [2024-07-10 23:42:20.409392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.451 [2024-07-10 23:42:20.409401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.451 [2024-07-10 23:42:20.409583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.451 [2024-07-10 23:42:20.409765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.451 [2024-07-10 23:42:20.409775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.451 [2024-07-10 23:42:20.409783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.451 [2024-07-10 23:42:20.412732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.451 [2024-07-10 23:42:20.421998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.451 [2024-07-10 23:42:20.422400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.451 [2024-07-10 23:42:20.422421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.451 [2024-07-10 23:42:20.422431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.451 [2024-07-10 23:42:20.422623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.451 [2024-07-10 23:42:20.422815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.422825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.422835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.425781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.435155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.435641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.435660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.435670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.435855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.436036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.436046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.436054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.439078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.448302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.448828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.448883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.448913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.449576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.450217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.450228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.450237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.453098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.461427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.461930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.461950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.461960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.462152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.462371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.462382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.462391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.465497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.474866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.475252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.475273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.475283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.475491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.475684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.475695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.475707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.478752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.488020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.488520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.488576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.488607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.489065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.489262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.489274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.489282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.492213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.501080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.501485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.501507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.501517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.501708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.501900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.501911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.501919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.452 [2024-07-10 23:42:20.504857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.452 [2024-07-10 23:42:20.514496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.452 [2024-07-10 23:42:20.515008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.452 [2024-07-10 23:42:20.515063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.452 [2024-07-10 23:42:20.515094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.452 [2024-07-10 23:42:20.515520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.452 [2024-07-10 23:42:20.515717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.452 [2024-07-10 23:42:20.515728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.452 [2024-07-10 23:42:20.515737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.518801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.527691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.528206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.528260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.528291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.528935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.529246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.529257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.529266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.532193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.540813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.541292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.541313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.541322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.541503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.541684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.541695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.541702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.544643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.553908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.554298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.554319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.554328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.554508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.554689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.554700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.554708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.557652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.567080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.567608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.567664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.567694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.568195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.568440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.568456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.568468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.572913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.580677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.581165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.581201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.581211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.581403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.581594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.581605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.581614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.584603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.593801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.594215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.594236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.594246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.594438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.594629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.594640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.594648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.597600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.606997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.714 [2024-07-10 23:42:20.607500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.714 [2024-07-10 23:42:20.607521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.714 [2024-07-10 23:42:20.607531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.714 [2024-07-10 23:42:20.607723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.714 [2024-07-10 23:42:20.607915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.714 [2024-07-10 23:42:20.607929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.714 [2024-07-10 23:42:20.607937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.714 [2024-07-10 23:42:20.610878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.714 [2024-07-10 23:42:20.620111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.620598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.620653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.620683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.621222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.621415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.621426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.621434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.624365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.633240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.633741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.633797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.633827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.634312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.634504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.634515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.634524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.637427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.646424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.646906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.646931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.646940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.647121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.647332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.647343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.647353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.650281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.659577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.660028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.660047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.660056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.660262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.660454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.660465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.660473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.663403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.672775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.673280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.673349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.673381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.673886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.674067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.674077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.674085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.677030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.685932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.686449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.686505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.686535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.686963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.687154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.687171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.687180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.690175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.699052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.699568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.699624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.699661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.700322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.700811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.700822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.700831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.703759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.712277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.712695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.712716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.712726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.712917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.713108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.713119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.713127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.716252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.725722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.726210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.726231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.726240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.726433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.726625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.726636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.726644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.729665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.738893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.739420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.739476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.739506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.739911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.740117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.740136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.740149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.715 [2024-07-10 23:42:20.744603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.715 [2024-07-10 23:42:20.752511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.715 [2024-07-10 23:42:20.753022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.715 [2024-07-10 23:42:20.753077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.715 [2024-07-10 23:42:20.753106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.715 [2024-07-10 23:42:20.753768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.715 [2024-07-10 23:42:20.754104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.715 [2024-07-10 23:42:20.754115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.715 [2024-07-10 23:42:20.754123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.716 [2024-07-10 23:42:20.757098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.716 [2024-07-10 23:42:20.765596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.716 [2024-07-10 23:42:20.765982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.716 [2024-07-10 23:42:20.766002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.716 [2024-07-10 23:42:20.766012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.716 [2024-07-10 23:42:20.766209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.716 [2024-07-10 23:42:20.766401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.716 [2024-07-10 23:42:20.766412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.716 [2024-07-10 23:42:20.766420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.716 [2024-07-10 23:42:20.769354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.716 [2024-07-10 23:42:20.778821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.977 [2024-07-10 23:42:20.779304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.977 [2024-07-10 23:42:20.779326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.977 [2024-07-10 23:42:20.779336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.977 [2024-07-10 23:42:20.779529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.977 [2024-07-10 23:42:20.779722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.977 [2024-07-10 23:42:20.779733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.977 [2024-07-10 23:42:20.779742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.977 [2024-07-10 23:42:20.782747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.977 [2024-07-10 23:42:20.791995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.977 [2024-07-10 23:42:20.792502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.977 [2024-07-10 23:42:20.792523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.977 [2024-07-10 23:42:20.792533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.977 [2024-07-10 23:42:20.792725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.977 [2024-07-10 23:42:20.792916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.977 [2024-07-10 23:42:20.792927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.977 [2024-07-10 23:42:20.792935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.977 [2024-07-10 23:42:20.795877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.977 [2024-07-10 23:42:20.805146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.977 [2024-07-10 23:42:20.805567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.977 [2024-07-10 23:42:20.805621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.977 [2024-07-10 23:42:20.805652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.977 [2024-07-10 23:42:20.806082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.977 [2024-07-10 23:42:20.806290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.806302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.806310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.809302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.818333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.818820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.818875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.818905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.819562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.820076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.820087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.820095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.823026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.831617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.832090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.832153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.832209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.832819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.833012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.833023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.833031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.836019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.844897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.845356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.845411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.845442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.845957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.846139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.846150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.846157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.849107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.858148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.858633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.858654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.858663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.858854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.859045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.859056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.859064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.861998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.871305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.871730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.871785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.871816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.872284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.872478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.872494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.872503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.875467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.884532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.885026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.885047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.885057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.885261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.885459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.885470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.885479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.888595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.897759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.898267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.898322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.898353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.898996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.899386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.899398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.899406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.902336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.911025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.911503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.911523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.911533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.911725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.911917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.911928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.911936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.914938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.924370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.924867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.924889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.924898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.925090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.925288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.925300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.925308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.928278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.937652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.938046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.938066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.938075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.938282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.978 [2024-07-10 23:42:20.938495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.978 [2024-07-10 23:42:20.938506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.978 [2024-07-10 23:42:20.938514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.978 [2024-07-10 23:42:20.941481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.978 [2024-07-10 23:42:20.950843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.978 [2024-07-10 23:42:20.951232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.978 [2024-07-10 23:42:20.951251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.978 [2024-07-10 23:42:20.951261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.978 [2024-07-10 23:42:20.951452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.979 [2024-07-10 23:42:20.951643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.979 [2024-07-10 23:42:20.951653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.979 [2024-07-10 23:42:20.951662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.979 [2024-07-10 23:42:20.954676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.979 [2024-07-10 23:42:20.964102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.979 [2024-07-10 23:42:20.964465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.979 [2024-07-10 23:42:20.964486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.979 [2024-07-10 23:42:20.964499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.979 [2024-07-10 23:42:20.964690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.979 [2024-07-10 23:42:20.964882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.979 [2024-07-10 23:42:20.964893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.979 [2024-07-10 23:42:20.964902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.979 [2024-07-10 23:42:20.968018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.979 [2024-07-10 23:42:20.977561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.979 [2024-07-10 23:42:20.977948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.979 [2024-07-10 23:42:20.977968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.979 [2024-07-10 23:42:20.977978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.979 [2024-07-10 23:42:20.978176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.979 [2024-07-10 23:42:20.978368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.979 [2024-07-10 23:42:20.978379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.979 [2024-07-10 23:42:20.978388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.979 [2024-07-10 23:42:20.981577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.979 [2024-07-10 23:42:20.990853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:11.979 [2024-07-10 23:42:20.991209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.979 [2024-07-10 23:42:20.991231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:11.979 [2024-07-10 23:42:20.991241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:11.979 [2024-07-10 23:42:20.991433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:11.979 [2024-07-10 23:42:20.991625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:11.979 [2024-07-10 23:42:20.991636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:11.979 [2024-07-10 23:42:20.991645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:11.979 [2024-07-10 23:42:20.994579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:11.979 [2024-07-10 23:42:21.004229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.979 [2024-07-10 23:42:21.004562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.979 [2024-07-10 23:42:21.004583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.979 [2024-07-10 23:42:21.004592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.979 [2024-07-10 23:42:21.004784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.979 [2024-07-10 23:42:21.004976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.979 [2024-07-10 23:42:21.004990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.979 [2024-07-10 23:42:21.004999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.979 [2024-07-10 23:42:21.007939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:11.979 [2024-07-10 23:42:21.017511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.979 [2024-07-10 23:42:21.017873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.979 [2024-07-10 23:42:21.017894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.979 [2024-07-10 23:42:21.017903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.979 [2024-07-10 23:42:21.018095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.979 [2024-07-10 23:42:21.018293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.979 [2024-07-10 23:42:21.018310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.979 [2024-07-10 23:42:21.018319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.979 [2024-07-10 23:42:21.021329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:11.979 [2024-07-10 23:42:21.030742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:11.979 [2024-07-10 23:42:21.031167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.979 [2024-07-10 23:42:21.031188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:11.979 [2024-07-10 23:42:21.031198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:11.979 [2024-07-10 23:42:21.031390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:11.979 [2024-07-10 23:42:21.031581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:11.979 [2024-07-10 23:42:21.031592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:11.979 [2024-07-10 23:42:21.031600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:11.979 [2024-07-10 23:42:21.034631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.239 [2024-07-10 23:42:21.044053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.239 [2024-07-10 23:42:21.044512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.239 [2024-07-10 23:42:21.044566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.239 [2024-07-10 23:42:21.044597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.239 [2024-07-10 23:42:21.045254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.239 [2024-07-10 23:42:21.045687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.239 [2024-07-10 23:42:21.045698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.239 [2024-07-10 23:42:21.045707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.239 [2024-07-10 23:42:21.048729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.239 [2024-07-10 23:42:21.057275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.239 [2024-07-10 23:42:21.057681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.239 [2024-07-10 23:42:21.057701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.239 [2024-07-10 23:42:21.057711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.239 [2024-07-10 23:42:21.057903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.239 [2024-07-10 23:42:21.058095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.239 [2024-07-10 23:42:21.058105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.239 [2024-07-10 23:42:21.058114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.239 [2024-07-10 23:42:21.061052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.239 [2024-07-10 23:42:21.070525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.239 [2024-07-10 23:42:21.070958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.239 [2024-07-10 23:42:21.070979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.239 [2024-07-10 23:42:21.070989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.239 [2024-07-10 23:42:21.071186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.239 [2024-07-10 23:42:21.071379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.239 [2024-07-10 23:42:21.071390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.239 [2024-07-10 23:42:21.071399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.239 [2024-07-10 23:42:21.074432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.239 [2024-07-10 23:42:21.083794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.239 [2024-07-10 23:42:21.084219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.239 [2024-07-10 23:42:21.084241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.239 [2024-07-10 23:42:21.084250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.239 [2024-07-10 23:42:21.084443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.239 [2024-07-10 23:42:21.084635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.239 [2024-07-10 23:42:21.084646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.239 [2024-07-10 23:42:21.084654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.239 [2024-07-10 23:42:21.087613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.239 [2024-07-10 23:42:21.096966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.239 [2024-07-10 23:42:21.097391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.239 [2024-07-10 23:42:21.097412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.239 [2024-07-10 23:42:21.097425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.239 [2024-07-10 23:42:21.097617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.239 [2024-07-10 23:42:21.097808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.239 [2024-07-10 23:42:21.097819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.239 [2024-07-10 23:42:21.097827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.239 [2024-07-10 23:42:21.100760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.239 [2024-07-10 23:42:21.110094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.239 [2024-07-10 23:42:21.110450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.239 [2024-07-10 23:42:21.110504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.239 [2024-07-10 23:42:21.110534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.111193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.111769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.111780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.111788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.114822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.240 [2024-07-10 23:42:21.123403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.123824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.123844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.123853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.124045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.124243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.124255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.124263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.127270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.240 [2024-07-10 23:42:21.136663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.137126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.137204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.137235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.137808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.138008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.138019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.138028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.141014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.240 [2024-07-10 23:42:21.149843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.150310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.150366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.150396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.150986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.151174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.151200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.151209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.154140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.240 [2024-07-10 23:42:21.163078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.163469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.163489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.163498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.163690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.163882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.163893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.163902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.166888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.240 [2024-07-10 23:42:21.176222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.176626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.176649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.176658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.176850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.177042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.177053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.177062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.180004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.240 [2024-07-10 23:42:21.189658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.190053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.190074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.190083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.190288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.190486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.190497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.190506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.193611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.240 [2024-07-10 23:42:21.203150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.203632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.203653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.203663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.203862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.204059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.204070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.204079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.207195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.240 [2024-07-10 23:42:21.216565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.217052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.217072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.217082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.217285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.217483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.217494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.217503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.220609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.240 [2024-07-10 23:42:21.229971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.230365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.230386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.230400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.230602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.230798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.230809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.230817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.233928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.240 [2024-07-10 23:42:21.243395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.243799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.240 [2024-07-10 23:42:21.243820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.240 [2024-07-10 23:42:21.243829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.240 [2024-07-10 23:42:21.244027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.240 [2024-07-10 23:42:21.244229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.240 [2024-07-10 23:42:21.244241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.240 [2024-07-10 23:42:21.244250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.240 [2024-07-10 23:42:21.247336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.240 [2024-07-10 23:42:21.256768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.240 [2024-07-10 23:42:21.257280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.241 [2024-07-10 23:42:21.257337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.241 [2024-07-10 23:42:21.257367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.241 [2024-07-10 23:42:21.258010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.241 [2024-07-10 23:42:21.258356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.241 [2024-07-10 23:42:21.258368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.241 [2024-07-10 23:42:21.258376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.241 [2024-07-10 23:42:21.261393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.241 [2024-07-10 23:42:21.270099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.241 [2024-07-10 23:42:21.270547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.241 [2024-07-10 23:42:21.270569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.241 [2024-07-10 23:42:21.270579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.241 [2024-07-10 23:42:21.270775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.241 [2024-07-10 23:42:21.270977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.241 [2024-07-10 23:42:21.270988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.241 [2024-07-10 23:42:21.271000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.241 [2024-07-10 23:42:21.274107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.241 [2024-07-10 23:42:21.283469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.241 [2024-07-10 23:42:21.283946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.241 [2024-07-10 23:42:21.283969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.241 [2024-07-10 23:42:21.283978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.241 [2024-07-10 23:42:21.284181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.241 [2024-07-10 23:42:21.284379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.241 [2024-07-10 23:42:21.284390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.241 [2024-07-10 23:42:21.284398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.241 [2024-07-10 23:42:21.287479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.241 [2024-07-10 23:42:21.296912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.241 [2024-07-10 23:42:21.297314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.241 [2024-07-10 23:42:21.297341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.241 [2024-07-10 23:42:21.297352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.241 [2024-07-10 23:42:21.297550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.241 [2024-07-10 23:42:21.297746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.241 [2024-07-10 23:42:21.297758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.241 [2024-07-10 23:42:21.297767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.241 [2024-07-10 23:42:21.300862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.501 [2024-07-10 23:42:21.310330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.501 [2024-07-10 23:42:21.310821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.501 [2024-07-10 23:42:21.310842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.501 [2024-07-10 23:42:21.310853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.501 [2024-07-10 23:42:21.311051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.501 [2024-07-10 23:42:21.311254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.501 [2024-07-10 23:42:21.311266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.501 [2024-07-10 23:42:21.311275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.501 [2024-07-10 23:42:21.314346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.501 [2024-07-10 23:42:21.323657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.501 [2024-07-10 23:42:21.324110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.501 [2024-07-10 23:42:21.324131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.501 [2024-07-10 23:42:21.324140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.501 [2024-07-10 23:42:21.324340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.501 [2024-07-10 23:42:21.324560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.501 [2024-07-10 23:42:21.324571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.501 [2024-07-10 23:42:21.324580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.501 [2024-07-10 23:42:21.327778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.501 [2024-07-10 23:42:21.336998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.501 [2024-07-10 23:42:21.337506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.501 [2024-07-10 23:42:21.337564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.501 [2024-07-10 23:42:21.337596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.501 [2024-07-10 23:42:21.338143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.501 [2024-07-10 23:42:21.338345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.501 [2024-07-10 23:42:21.338356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.501 [2024-07-10 23:42:21.338366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.501 [2024-07-10 23:42:21.341470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.501 [2024-07-10 23:42:21.350457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.501 [2024-07-10 23:42:21.350957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.501 [2024-07-10 23:42:21.350979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.501 [2024-07-10 23:42:21.350989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.501 [2024-07-10 23:42:21.351192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.501 [2024-07-10 23:42:21.351389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.501 [2024-07-10 23:42:21.351400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.501 [2024-07-10 23:42:21.351409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.501 [2024-07-10 23:42:21.354492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.501 [2024-07-10 23:42:21.363711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.501 [2024-07-10 23:42:21.364211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.501 [2024-07-10 23:42:21.364235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.501 [2024-07-10 23:42:21.364245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.501 [2024-07-10 23:42:21.364438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.501 [2024-07-10 23:42:21.364629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.501 [2024-07-10 23:42:21.364639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.501 [2024-07-10 23:42:21.364648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.501 [2024-07-10 23:42:21.367600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.501 [2024-07-10 23:42:21.376843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.501 [2024-07-10 23:42:21.377315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.501 [2024-07-10 23:42:21.377335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.501 [2024-07-10 23:42:21.377344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.501 [2024-07-10 23:42:21.377525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.501 [2024-07-10 23:42:21.377706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.377716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.377724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.380666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.502 [2024-07-10 23:42:21.390111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.390593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.390650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.390681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.391323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.391516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.391527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.391536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.394464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.502 [2024-07-10 23:42:21.403297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.403809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.403829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.403839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.404030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.404230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.404242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.404251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.407167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.502 [2024-07-10 23:42:21.416454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.416907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.416927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.416936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.417117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.417327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.417338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.417347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.420270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.502 [2024-07-10 23:42:21.429601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.430102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.430158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.430203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.430692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.430883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.430893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.430902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.433829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.502 [2024-07-10 23:42:21.442764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.443219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.443240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.443250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.443446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.443626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.443636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.443645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.446589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.502 [2024-07-10 23:42:21.455956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.456443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.456464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.456473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.456654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.456834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.456844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.456852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.459797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.502 [2024-07-10 23:42:21.469032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.469535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.469556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.469566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.469763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.469960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.469971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.469979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.473091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.502 [2024-07-10 23:42:21.482423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.482912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.482932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.482942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.483140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.483342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.483354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.483362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.486383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.502 [2024-07-10 23:42:21.495734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.496243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.496270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.496280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.496484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.496675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.496686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.496694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.499620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.502 [2024-07-10 23:42:21.508959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.509455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.509511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.509541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.510198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.502 [2024-07-10 23:42:21.510723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.502 [2024-07-10 23:42:21.510733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.502 [2024-07-10 23:42:21.510741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.502 [2024-07-10 23:42:21.513668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.502 [2024-07-10 23:42:21.522029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.502 [2024-07-10 23:42:21.522521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.502 [2024-07-10 23:42:21.522576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.502 [2024-07-10 23:42:21.522606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.502 [2024-07-10 23:42:21.523262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.503 [2024-07-10 23:42:21.523809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.503 [2024-07-10 23:42:21.523820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.503 [2024-07-10 23:42:21.523828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.503 [2024-07-10 23:42:21.526749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.503 [2024-07-10 23:42:21.535223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.503 [2024-07-10 23:42:21.535712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.503 [2024-07-10 23:42:21.535766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.503 [2024-07-10 23:42:21.535796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.503 [2024-07-10 23:42:21.536454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.503 [2024-07-10 23:42:21.536649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.503 [2024-07-10 23:42:21.536659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.503 [2024-07-10 23:42:21.536668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.503 [2024-07-10 23:42:21.539563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.503 [2024-07-10 23:42:21.548389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.503 [2024-07-10 23:42:21.548758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.503 [2024-07-10 23:42:21.548813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.503 [2024-07-10 23:42:21.548843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.503 [2024-07-10 23:42:21.549411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.503 [2024-07-10 23:42:21.549694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.503 [2024-07-10 23:42:21.549709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.503 [2024-07-10 23:42:21.549721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.503 [2024-07-10 23:42:21.554168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.503 [2024-07-10 23:42:21.562157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.503 [2024-07-10 23:42:21.562599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.503 [2024-07-10 23:42:21.562656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.503 [2024-07-10 23:42:21.562685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.503 [2024-07-10 23:42:21.563347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.503 [2024-07-10 23:42:21.563884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.503 [2024-07-10 23:42:21.563895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.503 [2024-07-10 23:42:21.563904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.503 [2024-07-10 23:42:21.566963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.763 [2024-07-10 23:42:21.575363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.763 [2024-07-10 23:42:21.575810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.763 [2024-07-10 23:42:21.575830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.763 [2024-07-10 23:42:21.575839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.763 [2024-07-10 23:42:21.576020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.763 [2024-07-10 23:42:21.576224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.763 [2024-07-10 23:42:21.576236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.763 [2024-07-10 23:42:21.576248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.763 [2024-07-10 23:42:21.579171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.763 [2024-07-10 23:42:21.588450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.763 [2024-07-10 23:42:21.588939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.763 [2024-07-10 23:42:21.588993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.763 [2024-07-10 23:42:21.589038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.763 [2024-07-10 23:42:21.589700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.763 [2024-07-10 23:42:21.590179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.763 [2024-07-10 23:42:21.590190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.763 [2024-07-10 23:42:21.590199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.763 [2024-07-10 23:42:21.594463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.763 [2024-07-10 23:42:21.602486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.763 [2024-07-10 23:42:21.602994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.763 [2024-07-10 23:42:21.603015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.763 [2024-07-10 23:42:21.603025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.763 [2024-07-10 23:42:21.603221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.763 [2024-07-10 23:42:21.603413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.763 [2024-07-10 23:42:21.603424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.763 [2024-07-10 23:42:21.603432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.763 [2024-07-10 23:42:21.606402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.763 [2024-07-10 23:42:21.615688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.763 [2024-07-10 23:42:21.616104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.763 [2024-07-10 23:42:21.616158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.763 [2024-07-10 23:42:21.616205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.763 [2024-07-10 23:42:21.616702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.763 [2024-07-10 23:42:21.616893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.763 [2024-07-10 23:42:21.616903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.763 [2024-07-10 23:42:21.616912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.763 [2024-07-10 23:42:21.619836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.763 [2024-07-10 23:42:21.628817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.763 [2024-07-10 23:42:21.629311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.763 [2024-07-10 23:42:21.629376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.763 [2024-07-10 23:42:21.629407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.763 [2024-07-10 23:42:21.629940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.763 [2024-07-10 23:42:21.630121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.763 [2024-07-10 23:42:21.630131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.763 [2024-07-10 23:42:21.630139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.763 [2024-07-10 23:42:21.633077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.763 [2024-07-10 23:42:21.642126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.763 [2024-07-10 23:42:21.642629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.763 [2024-07-10 23:42:21.642684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.763 [2024-07-10 23:42:21.642715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.763 [2024-07-10 23:42:21.643237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.763 [2024-07-10 23:42:21.643429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.643440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.643448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.646372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.764 [2024-07-10 23:42:21.655287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.655756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.655852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.656508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.657007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.657017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.657025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.659953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.764 [2024-07-10 23:42:21.668448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.668924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.668983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.669015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.669673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.670184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.670195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.670204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.673122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.764 [2024-07-10 23:42:21.681537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.681966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.681985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.681995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.682182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.682390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.682401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.682409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.685331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.764 [2024-07-10 23:42:21.694638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.695119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.695181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.695212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.695846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.696038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.696048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.696057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.698994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.764 [2024-07-10 23:42:21.707719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.708199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.708219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.708229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.708421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.708613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.708624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.708636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.711591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.764 [2024-07-10 23:42:21.720825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.721238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.721258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.721268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.721466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.721663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.721674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.721683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.724786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.764 [2024-07-10 23:42:21.734260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.734747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.734768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.734778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.734969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.735166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.735177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.735186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.738201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.764 [2024-07-10 23:42:21.747512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.747995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.748015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.748024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.748222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.748414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.748425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.748433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.751356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.764 [2024-07-10 23:42:21.760731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.761239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.761300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.761331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.761659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.761940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.761955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.761967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.766416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.764 [2024-07-10 23:42:21.774442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.774834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.774856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.774865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.775051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.775267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.775278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.775287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.778255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.764 [2024-07-10 23:42:21.787521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.787980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.788035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.788065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.788524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.788716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.788727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.788735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.764 [2024-07-10 23:42:21.791751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.764 [2024-07-10 23:42:21.800594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.764 [2024-07-10 23:42:21.801063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.764 [2024-07-10 23:42:21.801082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.764 [2024-07-10 23:42:21.801091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.764 [2024-07-10 23:42:21.801304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.764 [2024-07-10 23:42:21.801497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.764 [2024-07-10 23:42:21.801507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.764 [2024-07-10 23:42:21.801515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.765 [2024-07-10 23:42:21.804437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:12.765 [2024-07-10 23:42:21.813805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.765 [2024-07-10 23:42:21.814264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.765 [2024-07-10 23:42:21.814284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.765 [2024-07-10 23:42:21.814294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.765 [2024-07-10 23:42:21.814476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.765 [2024-07-10 23:42:21.814656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.765 [2024-07-10 23:42:21.814666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.765 [2024-07-10 23:42:21.814674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:12.765 [2024-07-10 23:42:21.817619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:12.765 [2024-07-10 23:42:21.826884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:12.765 [2024-07-10 23:42:21.827324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.765 [2024-07-10 23:42:21.827344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:12.765 [2024-07-10 23:42:21.827354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:12.765 [2024-07-10 23:42:21.827546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:12.765 [2024-07-10 23:42:21.827737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:12.765 [2024-07-10 23:42:21.827747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:12.765 [2024-07-10 23:42:21.827756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.830820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.025 [2024-07-10 23:42:21.839963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.840451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.840471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.840481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.840673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.025 [2024-07-10 23:42:21.840865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.025 [2024-07-10 23:42:21.840876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.025 [2024-07-10 23:42:21.840887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.843867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.025 [2024-07-10 23:42:21.853110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.853563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.853631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.853661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.854290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.025 [2024-07-10 23:42:21.854482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.025 [2024-07-10 23:42:21.854493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.025 [2024-07-10 23:42:21.854501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.857423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.025 [2024-07-10 23:42:21.866280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.866765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.866820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.866850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.867428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.025 [2024-07-10 23:42:21.867619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.025 [2024-07-10 23:42:21.867630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.025 [2024-07-10 23:42:21.867638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.870560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.025 [2024-07-10 23:42:21.879350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.879737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.879757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.879767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.879948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.025 [2024-07-10 23:42:21.880128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.025 [2024-07-10 23:42:21.880138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.025 [2024-07-10 23:42:21.880146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.883089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.025 [2024-07-10 23:42:21.892568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.893046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.893066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.893076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.893274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.025 [2024-07-10 23:42:21.893465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.025 [2024-07-10 23:42:21.893476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.025 [2024-07-10 23:42:21.893485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.896406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.025 [2024-07-10 23:42:21.905779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.906258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.906279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.906288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.906479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.025 [2024-07-10 23:42:21.906670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.025 [2024-07-10 23:42:21.906681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.025 [2024-07-10 23:42:21.906689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.025 [2024-07-10 23:42:21.909639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.025 [2024-07-10 23:42:21.918896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.025 [2024-07-10 23:42:21.919382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.025 [2024-07-10 23:42:21.919403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.025 [2024-07-10 23:42:21.919412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.025 [2024-07-10 23:42:21.919603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:21.919794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:21.919805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:21.919814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:21.922756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2666346 Killed "${NVMF_APP[@]}" "$@" 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:13.026 [2024-07-10 23:42:21.932353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:21.932812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:21.932832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:21.932842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2667981 00:38:13.026 [2024-07-10 23:42:21.933040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:21.933244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:21.933256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:21.933265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2667981 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2667981 ']' 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:13.026 23:42:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:13.026 [2024-07-10 23:42:21.936376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
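(Editor's note.) The "Killed" line marks the old nvmf_tgt (pid 2666346) being terminated by the test; tgt_init/nvmfappstart then relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the new process (nvmfpid=2667981) is reachable. A rough sketch of the wait-for-listen idea, under stated assumptions — the real helper in autotest_common.sh also verifies the pid and probes the RPC server rather than only checking for the socket file:

  # Assumed simplification of waitforlisten: poll for the RPC UNIX socket.
  rpc_addr=/var/tmp/spdk.sock        # default shown in the trace above
  for _ in $(seq 1 100); do
      [[ -S $rpc_addr ]] && break    # socket appears once nvmf_tgt is up
      sleep 0.1
  done
  [[ -S $rpc_addr ]] || echo "nvmf_tgt did not come up in time" >&2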
00:38:13.026 [2024-07-10 23:42:21.945742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:21.946215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:21.946237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:21.946247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:21.946445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:21.946642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:21.946653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:21.946662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:21.949770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.026 [2024-07-10 23:42:21.959144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:21.959662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:21.959683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:21.959694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:21.959896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:21.960095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:21.960106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:21.960121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:21.963240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.026 [2024-07-10 23:42:21.972566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:21.973067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:21.973088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:21.973098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:21.973303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:21.973504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:21.973515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:21.973525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:21.976648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.026 [2024-07-10 23:42:21.986050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:21.986535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:21.986557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:21.986567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:21.986767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:21.986967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:21.986978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:21.986987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:21.990097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.026 [2024-07-10 23:42:21.995574] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:38:13.026 [2024-07-10 23:42:21.995648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.026 [2024-07-10 23:42:21.999458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:21.999933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:21.999955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:21.999965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:22.000171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:22.000376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:22.000388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:22.000398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:22.003653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.026 [2024-07-10 23:42:22.012863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:22.013349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:22.013371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:22.013382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:22.013583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:22.013784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:22.013796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:22.013805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:22.016935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
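(Editor's note.) The DPDK EAL parameter line echoes how the new nvmf_tgt was launched: -c 0xE is the core mask (binary 1110, cores 1-3, which is why the app shortly reports "Total cores available: 3"), --file-prefix=spdk0 keeps its hugepage files distinct from other SPDK processes, and --base-virtaddr pins the shared-memory mapping. An illustrative-only decode of such a mask in bash:

  # Decode a DPDK core mask: each set bit enables one CPU core.
  mask=0xE                           # from the EAL parameters above
  for cpu in {0..7}; do
      (( (mask >> cpu) & 1 )) && echo "core $cpu in mask"
  done                               # prints cores 1, 2, 3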
00:38:13.026 [2024-07-10 23:42:22.026267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:22.026768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:22.026790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:22.026801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:22.027002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:22.027207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:22.027219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.026 [2024-07-10 23:42:22.027230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.026 [2024-07-10 23:42:22.030340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.026 [2024-07-10 23:42:22.039664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.026 [2024-07-10 23:42:22.040151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.026 [2024-07-10 23:42:22.040178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.026 [2024-07-10 23:42:22.040189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.026 [2024-07-10 23:42:22.040391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.026 [2024-07-10 23:42:22.040592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.026 [2024-07-10 23:42:22.040603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.027 [2024-07-10 23:42:22.040616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.027 [2024-07-10 23:42:22.043697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.027 [2024-07-10 23:42:22.053008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.027 [2024-07-10 23:42:22.053515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.027 [2024-07-10 23:42:22.053537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.027 [2024-07-10 23:42:22.053548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.027 [2024-07-10 23:42:22.053749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.027 [2024-07-10 23:42:22.053949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.027 [2024-07-10 23:42:22.053960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.027 [2024-07-10 23:42:22.053970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.027 [2024-07-10 23:42:22.057067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.027 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.027 [2024-07-10 23:42:22.066476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.027 [2024-07-10 23:42:22.066949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.027 [2024-07-10 23:42:22.066969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.027 [2024-07-10 23:42:22.066980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.027 [2024-07-10 23:42:22.067186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.027 [2024-07-10 23:42:22.067388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.027 [2024-07-10 23:42:22.067399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.027 [2024-07-10 23:42:22.067409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.027 [2024-07-10 23:42:22.070536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
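(Editor's note.) The "No free 2048 kB hugepages reported on node 1" notice means EAL found no free 2 MiB hugepages on NUMA node 1 (commonly they are all reserved on node 0); it is informational here and unrelated to the connect failures. To inspect the per-node pools on a machine like this one (standard sysfs paths):

  # Free 2 MiB hugepages per NUMA node:
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages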
00:38:13.027 [2024-07-10 23:42:22.079963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.027 [2024-07-10 23:42:22.080436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.027 [2024-07-10 23:42:22.080476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.027 [2024-07-10 23:42:22.080487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.027 [2024-07-10 23:42:22.080688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.027 [2024-07-10 23:42:22.080889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.027 [2024-07-10 23:42:22.080900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.027 [2024-07-10 23:42:22.080909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.027 [2024-07-10 23:42:22.084027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.288 [2024-07-10 23:42:22.093460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.288 [2024-07-10 23:42:22.093956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.288 [2024-07-10 23:42:22.093978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.288 [2024-07-10 23:42:22.093988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.288 [2024-07-10 23:42:22.094194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.288 [2024-07-10 23:42:22.094395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.288 [2024-07-10 23:42:22.094406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.288 [2024-07-10 23:42:22.094416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.288 [2024-07-10 23:42:22.097544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.288 [2024-07-10 23:42:22.106843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:13.288 [2024-07-10 23:42:22.106876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.288 [2024-07-10 23:42:22.107368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.288 [2024-07-10 23:42:22.107390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.288 [2024-07-10 23:42:22.107400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.288 [2024-07-10 23:42:22.107601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.288 [2024-07-10 23:42:22.107802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.288 [2024-07-10 23:42:22.107813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.288 [2024-07-10 23:42:22.107822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.288 [2024-07-10 23:42:22.110951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.288 [2024-07-10 23:42:22.120390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.288 [2024-07-10 23:42:22.120919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.288 [2024-07-10 23:42:22.120942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.288 [2024-07-10 23:42:22.120953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.288 [2024-07-10 23:42:22.121156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.288 [2024-07-10 23:42:22.121362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.288 [2024-07-10 23:42:22.121374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.288 [2024-07-10 23:42:22.121384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.288 [2024-07-10 23:42:22.124507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.288 [2024-07-10 23:42:22.133837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.134314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.134336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.134347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.134553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.134754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.134765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.134774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.137876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.147127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.147599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.147620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.147631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.147827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.148021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.148032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.148041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.151133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.160498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.160944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.160965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.160975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.161175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.161391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.161402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.161411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.164480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.173819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.174286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.174306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.174316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.174511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.174705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.174719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.174728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.177805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.187091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.187559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.187580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.187590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.187784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.187979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.187990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.187999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.191090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.200418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.200897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.200918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.200928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.201121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.201340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.201352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.201361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.204422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.213742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.288 [2024-07-10 23:42:22.214234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.288 [2024-07-10 23:42:22.214255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.288 [2024-07-10 23:42:22.214265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.288 [2024-07-10 23:42:22.214473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.288 [2024-07-10 23:42:22.214667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.288 [2024-07-10 23:42:22.214677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.288 [2024-07-10 23:42:22.214686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.288 [2024-07-10 23:42:22.217714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.288 [2024-07-10 23:42:22.227053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.227576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.227598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.227609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.227812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.228012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.228024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.228033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.231158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.240605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.241072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.241094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.241105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.241310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.241511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.241522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.241531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.244609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.254072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.254525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.254546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.254556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.254756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.254958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.254969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.254978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.258060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.267423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.267912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.267935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.267948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.268143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.268364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.268376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.268386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.271454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.280799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.281317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.281339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.281349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.281542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.281735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.281746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.281756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.284788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.294083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.294501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.294521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.294532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.294725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.294920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.294931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.294939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.298044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.307516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.307921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.307941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.307951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.308143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.308362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.308377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.308386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.311451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.320920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.321382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.321405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.321415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.321614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.321814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.321825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.321834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.324885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.333382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:13.289 [2024-07-10 23:42:22.333413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:13.289 [2024-07-10 23:42:22.333427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:13.289 [2024-07-10 23:42:22.333436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:13.289 [2024-07-10 23:42:22.333446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:13.289 [2024-07-10 23:42:22.333506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:38:13.289 [2024-07-10 23:42:22.333691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:38:13.289 [2024-07-10 23:42:22.333698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:38:13.289 [2024-07-10 23:42:22.334289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.334698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.334719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.334730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.334930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.335131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.335142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.335151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.338565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.289 [2024-07-10 23:42:22.347749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.289 [2024-07-10 23:42:22.348248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.289 [2024-07-10 23:42:22.348280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.289 [2024-07-10 23:42:22.348296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.289 [2024-07-10 23:42:22.348500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.289 [2024-07-10 23:42:22.348703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.289 [2024-07-10 23:42:22.348714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.289 [2024-07-10 23:42:22.348724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.289 [2024-07-10 23:42:22.351861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.361305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.361779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.361801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.361812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.362014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.362222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.362235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.362245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.365371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.374783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.375193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.375216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.375227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.375430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.375630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.375642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.375651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.378776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.388193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.388543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.388563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.388574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.388774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.388979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.388990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.388999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.392118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.401707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.402178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.402200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.402211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.402412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.402613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.402624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.402633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.405754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.415136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.415511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.415532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.415543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.415742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.415942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.415953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.415962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.419086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.428706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.429156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.429185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.429197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.429400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.429603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.429615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.429625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.432771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.442226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.442593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.442616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.442627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.442829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.443030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.443042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.443051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.446191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.455610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.456106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.456131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.456142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.456350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.456551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.456563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.456573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.459709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.469142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.469500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.469521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.469531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.469731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.469932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.469944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.469953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.473080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.482682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.483157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.483184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.483201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.483403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.483602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.483614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.483623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.486743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.496144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.496602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.496622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.496633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.496832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.497032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.497043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.497053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.500172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.509568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.510062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.510084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.510094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.510298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.510497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.510509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.510518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.513634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.523028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.523509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.523530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.523540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.523739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.523943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.523954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.523963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.527076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.536473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.536945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.536966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.536977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.537207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.550 [2024-07-10 23:42:22.537407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.550 [2024-07-10 23:42:22.537418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.550 [2024-07-10 23:42:22.537427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.550 [2024-07-10 23:42:22.540539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.550 [2024-07-10 23:42:22.549921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.550 [2024-07-10 23:42:22.550372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.550 [2024-07-10 23:42:22.550393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.550 [2024-07-10 23:42:22.550403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.550 [2024-07-10 23:42:22.550602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.551 [2024-07-10 23:42:22.550801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.551 [2024-07-10 23:42:22.550812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.551 [2024-07-10 23:42:22.550821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.551 [2024-07-10 23:42:22.553935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.551 [2024-07-10 23:42:22.563305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.551 [2024-07-10 23:42:22.563768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.551 [2024-07-10 23:42:22.563790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.551 [2024-07-10 23:42:22.563800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.551 [2024-07-10 23:42:22.563998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.551 [2024-07-10 23:42:22.564203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.551 [2024-07-10 23:42:22.564215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.551 [2024-07-10 23:42:22.564223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.551 [2024-07-10 23:42:22.567340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.551 [2024-07-10 23:42:22.576719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.551 [2024-07-10 23:42:22.577182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.551 [2024-07-10 23:42:22.577204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.551 [2024-07-10 23:42:22.577214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.551 [2024-07-10 23:42:22.577414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.551 [2024-07-10 23:42:22.577613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.551 [2024-07-10 23:42:22.577624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.551 [2024-07-10 23:42:22.577633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.551 [2024-07-10 23:42:22.580749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.551 [2024-07-10 23:42:22.590149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.551 [2024-07-10 23:42:22.590546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.551 [2024-07-10 23:42:22.590570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.551 [2024-07-10 23:42:22.590581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.551 [2024-07-10 23:42:22.590784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.551 [2024-07-10 23:42:22.590985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.551 [2024-07-10 23:42:22.590997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.551 [2024-07-10 23:42:22.591006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.551 [2024-07-10 23:42:22.594140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.551 [2024-07-10 23:42:22.603575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.551 [2024-07-10 23:42:22.604025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.551 [2024-07-10 23:42:22.604047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.551 [2024-07-10 23:42:22.604058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.551 [2024-07-10 23:42:22.604265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.551 [2024-07-10 23:42:22.604467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.551 [2024-07-10 23:42:22.604478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.551 [2024-07-10 23:42:22.604487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.551 [2024-07-10 23:42:22.607614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.812 [2024-07-10 23:42:22.617010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.812 [2024-07-10 23:42:22.617447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.812 [2024-07-10 23:42:22.617472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.812 [2024-07-10 23:42:22.617483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.812 [2024-07-10 23:42:22.617683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.812 [2024-07-10 23:42:22.617882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.812 [2024-07-10 23:42:22.617894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.812 [2024-07-10 23:42:22.617903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.812 [2024-07-10 23:42:22.621024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.812 [2024-07-10 23:42:22.630433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.812 [2024-07-10 23:42:22.630881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.812 [2024-07-10 23:42:22.630902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.812 [2024-07-10 23:42:22.630912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.812 [2024-07-10 23:42:22.631111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.812 [2024-07-10 23:42:22.631316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.812 [2024-07-10 23:42:22.631328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.812 [2024-07-10 23:42:22.631337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.812 [2024-07-10 23:42:22.634460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.812 [2024-07-10 23:42:22.643856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.812 [2024-07-10 23:42:22.644325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.812 [2024-07-10 23:42:22.644346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.812 [2024-07-10 23:42:22.644357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.812 [2024-07-10 23:42:22.644557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.812 [2024-07-10 23:42:22.644757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.812 [2024-07-10 23:42:22.644769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.812 [2024-07-10 23:42:22.644778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.812 [2024-07-10 23:42:22.647888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.812 [2024-07-10 23:42:22.657281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.812 [2024-07-10 23:42:22.657699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.812 [2024-07-10 23:42:22.657720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.812 [2024-07-10 23:42:22.657730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.812 [2024-07-10 23:42:22.657928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.812 [2024-07-10 23:42:22.658130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.812 [2024-07-10 23:42:22.658141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.812 [2024-07-10 23:42:22.658150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.812 [2024-07-10 23:42:22.661273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.812 [2024-07-10 23:42:22.670643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.812 [2024-07-10 23:42:22.671100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.812 [2024-07-10 23:42:22.671121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.812 [2024-07-10 23:42:22.671133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.812 [2024-07-10 23:42:22.671336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.813 [2024-07-10 23:42:22.671538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.813 [2024-07-10 23:42:22.671549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.813 [2024-07-10 23:42:22.671558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.813 [2024-07-10 23:42:22.674669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.813 [2024-07-10 23:42:22.684049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.813 [2024-07-10 23:42:22.684459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.813 [2024-07-10 23:42:22.684480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.813 [2024-07-10 23:42:22.684490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.813 [2024-07-10 23:42:22.684689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.813 [2024-07-10 23:42:22.684889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.813 [2024-07-10 23:42:22.684901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.813 [2024-07-10 23:42:22.684910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.813 [2024-07-10 23:42:22.688024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.813 [2024-07-10 23:42:22.697405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:13.813 [2024-07-10 23:42:22.697754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.813 [2024-07-10 23:42:22.697775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:13.813 [2024-07-10 23:42:22.697786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:13.813 [2024-07-10 23:42:22.697984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:13.813 [2024-07-10 23:42:22.698190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:13.813 [2024-07-10 23:42:22.698202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:13.813 [2024-07-10 23:42:22.698211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:13.813 [2024-07-10 23:42:22.701337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:13.813 [2024-07-10 23:42:22.710897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.711346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.711369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.711379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.711579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.711779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.711791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.711800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.714916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.813 [2024-07-10 23:42:22.724310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.724664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.724686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.724696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.724895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.725094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.725115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.725124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.728247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.813 [2024-07-10 23:42:22.737813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.738175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.738197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.738209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.738408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.738607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.738619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.738628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.741747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.813 [2024-07-10 23:42:22.751314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.751789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.751815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.751826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.752024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.752227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.752240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.752249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.755363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.813 [2024-07-10 23:42:22.764729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.765206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.765227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.765239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.765439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.765639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.765651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.765660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.768774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.813 [2024-07-10 23:42:22.778150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.778661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.778682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.778693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.778890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.779088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.779100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.779109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.782223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.813 [2024-07-10 23:42:22.791586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.792084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.792105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.792116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.792319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.792522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.792533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.792542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.813 [2024-07-10 23:42:22.795651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.813 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:13.813 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:13.813 23:42:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:13.813 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:13.813 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:13.813 [2024-07-10 23:42:22.805013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.813 [2024-07-10 23:42:22.805492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.813 [2024-07-10 23:42:22.805514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.813 [2024-07-10 23:42:22.805524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.813 [2024-07-10 23:42:22.805724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.813 [2024-07-10 23:42:22.805923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.813 [2024-07-10 23:42:22.805934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.813 [2024-07-10 23:42:22.805943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.814 [2024-07-10 23:42:22.809056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.814 [2024-07-10 23:42:22.818438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.814 [2024-07-10 23:42:22.818907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.814 [2024-07-10 23:42:22.818928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.814 [2024-07-10 23:42:22.818939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.814 [2024-07-10 23:42:22.819136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.814 [2024-07-10 23:42:22.819341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.814 [2024-07-10 23:42:22.819353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.814 [2024-07-10 23:42:22.819362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.814 [2024-07-10 23:42:22.822472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.814 [2024-07-10 23:42:22.831844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.814 [2024-07-10 23:42:22.832291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.814 [2024-07-10 23:42:22.832313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.814 [2024-07-10 23:42:22.832324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.814 [2024-07-10 23:42:22.832524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.814 [2024-07-10 23:42:22.832728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.814 [2024-07-10 23:42:22.832739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.814 [2024-07-10 23:42:22.832748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.814 [2024-07-10 23:42:22.835860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:13.814 [2024-07-10 23:42:22.844045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.814 [2024-07-10 23:42:22.845238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.814 [2024-07-10 23:42:22.845660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.814 [2024-07-10 23:42:22.845681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.814 [2024-07-10 23:42:22.845691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.814 [2024-07-10 23:42:22.845889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.814 [2024-07-10 23:42:22.846088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.814 [2024-07-10 23:42:22.846099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.814 [2024-07-10 23:42:22.846108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.814 [2024-07-10 23:42:22.849220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.814 [2024-07-10 23:42:22.858766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.814 [2024-07-10 23:42:22.859169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.814 [2024-07-10 23:42:22.859191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.814 [2024-07-10 23:42:22.859201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.814 [2024-07-10 23:42:22.859399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.814 [2024-07-10 23:42:22.859597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.814 [2024-07-10 23:42:22.859608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.814 [2024-07-10 23:42:22.859617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.814 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:13.814 [2024-07-10 23:42:22.862726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:13.814 [2024-07-10 23:42:22.872278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.814 [2024-07-10 23:42:22.872811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.814 [2024-07-10 23:42:22.872833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:13.814 [2024-07-10 23:42:22.872843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:13.814 [2024-07-10 23:42:22.873046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:13.814 [2024-07-10 23:42:22.873254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.814 [2024-07-10 23:42:22.873267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.814 [2024-07-10 23:42:22.873277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.814 [2024-07-10 23:42:22.876407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:14.074 [2024-07-10 23:42:22.885653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.074 [2024-07-10 23:42:22.886169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.074 [2024-07-10 23:42:22.886191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.074 [2024-07-10 23:42:22.886202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.074 [2024-07-10 23:42:22.886403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.074 [2024-07-10 23:42:22.886605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.074 [2024-07-10 23:42:22.886617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.074 [2024-07-10 23:42:22.886626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.074 [2024-07-10 23:42:22.889756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:14.074 [2024-07-10 23:42:22.899168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.074 [2024-07-10 23:42:22.899645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.074 [2024-07-10 23:42:22.899666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.074 [2024-07-10 23:42:22.899676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.074 [2024-07-10 23:42:22.899876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.074 [2024-07-10 23:42:22.900077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.074 [2024-07-10 23:42:22.900088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.074 [2024-07-10 23:42:22.900097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.074 [2024-07-10 23:42:22.903218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:14.074 [2024-07-10 23:42:22.912602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.074 [2024-07-10 23:42:22.913110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.074 [2024-07-10 23:42:22.913131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.074 [2024-07-10 23:42:22.913142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.074 [2024-07-10 23:42:22.913351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.074 [2024-07-10 23:42:22.913552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.074 [2024-07-10 23:42:22.913563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.074 [2024-07-10 23:42:22.913572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.074 [2024-07-10 23:42:22.916710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:14.074 [2024-07-10 23:42:22.926103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.074 [2024-07-10 23:42:22.926498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.074 [2024-07-10 23:42:22.926519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.075 [2024-07-10 23:42:22.926529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.075 [2024-07-10 23:42:22.926729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.075 [2024-07-10 23:42:22.926927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.075 [2024-07-10 23:42:22.926939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.075 [2024-07-10 23:42:22.926948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.075 [2024-07-10 23:42:22.930064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:14.075 [2024-07-10 23:42:22.939622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.075 [2024-07-10 23:42:22.940013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.075 [2024-07-10 23:42:22.940034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.075 [2024-07-10 23:42:22.940045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.075 [2024-07-10 23:42:22.940251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.075 [2024-07-10 23:42:22.940451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.075 [2024-07-10 23:42:22.940462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.075 [2024-07-10 23:42:22.940471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.075 [2024-07-10 23:42:22.943586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:14.075 Malloc0 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:14.075 [2024-07-10 23:42:22.953152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.075 [2024-07-10 23:42:22.953655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.075 [2024-07-10 23:42:22.953676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.075 [2024-07-10 23:42:22.953686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.075 [2024-07-10 23:42:22.953888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.075 [2024-07-10 23:42:22.954087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.075 [2024-07-10 23:42:22.954098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.075 [2024-07-10 23:42:22.954107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.075 [2024-07-10 23:42:22.957222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:14.075 [2024-07-10 23:42:22.966603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.075 [2024-07-10 23:42:22.967098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.075 [2024-07-10 23:42:22.967118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:14.075 [2024-07-10 23:42:22.967128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.075 [2024-07-10 23:42:22.967333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:14.075 [2024-07-10 23:42:22.967533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.075 [2024-07-10 23:42:22.967543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.075 [2024-07-10 23:42:22.967551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:14.075 [2024-07-10 23:42:22.970662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:14.075 [2024-07-10 23:42:22.974665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.075 23:42:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2667057 00:38:14.075 [2024-07-10 23:42:22.980022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:14.075 [2024-07-10 23:42:23.054578] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:24.096 00:38:24.096 Latency(us) 00:38:24.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.096 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:24.096 Verification LBA range: start 0x0 length 0x4000 00:38:24.096 Nvme1n1 : 15.01 6789.81 26.52 11725.47 0.00 6891.30 726.59 26898.25 00:38:24.096 =================================================================================================================== 00:38:24.096 Total : 6789.81 26.52 11725.47 0.00 6891.30 726.59 26898.25 00:38:24.096 23:42:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:24.096 23:42:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:24.096 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:24.097 rmmod nvme_tcp 00:38:24.097 rmmod nvme_fabrics 00:38:24.097 rmmod nvme_keyring 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2667981 ']' 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2667981 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2667981 ']' 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 
2667981 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667981 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667981' 00:38:24.097 killing process with pid 2667981 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2667981 00:38:24.097 23:42:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2667981 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:26.001 23:42:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.907 23:42:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:27.907 00:38:27.907 real 0m30.183s 00:38:27.907 user 1m16.108s 00:38:27.907 sys 0m6.373s 00:38:27.907 23:42:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:27.907 23:42:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:27.907 ************************************ 00:38:27.907 END TEST nvmf_bdevperf 00:38:27.907 ************************************ 00:38:27.907 23:42:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:27.907 23:42:36 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:27.907 23:42:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:27.907 23:42:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:27.907 23:42:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.907 ************************************ 00:38:27.907 START TEST nvmf_target_disconnect 00:38:27.907 ************************************ 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:27.907 * Looking for test storage... 
00:38:27.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:27.907 23:42:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.908 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:27.908 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:27.908 23:42:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:38:27.908 23:42:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:33.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:33.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.180 23:42:41 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:33.180 Found net devices under 0000:86:00.0: cvl_0_0 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:33.180 Found net devices under 0000:86:00.1: cvl_0_1 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.180 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:33.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:38:33.181 00:38:33.181 --- 10.0.0.2 ping statistics --- 00:38:33.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.181 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:38:33.181 23:42:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:33.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:33.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:38:33.181 00:38:33.181 --- 10.0.0.1 ping statistics --- 00:38:33.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.181 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:33.181 ************************************ 00:38:33.181 START TEST nvmf_target_disconnect_tc1 00:38:33.181 ************************************ 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:38:33.181 
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:38:33.181 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:33.181 EAL: No free 2048 kB hugepages reported on node 1
00:38:33.181 [2024-07-10 23:42:42.215604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.181 [2024-07-10 23:42:42.215759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420
00:38:33.181 [2024-07-10 23:42:42.215921] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:38:33.181 [2024-07-10 23:42:42.215966] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:38:33.181 [2024-07-10 23:42:42.215998] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:38:33.181 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:38:33.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:38:33.181 Initializing NVMe Controllers
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:38:33.441
00:38:33.441 real 0m0.183s
00:38:33.441 user 0m0.070s
00:38:33.441 sys 0m0.113s
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:38:33.441 ************************************
00:38:33.441 END TEST nvmf_target_disconnect_tc1
00:38:33.441 ************************************
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:33.441 ************************************
00:38:33.441 START TEST nvmf_target_disconnect_tc2
00:38:33.441 ************************************
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2673371
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2673371
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2673371 ']'
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:33.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
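(disconnect_init above boils down to: launch nvmf_tgt inside the target namespace, remember its pid, and poll until the RPC socket exists. A condensed sketch of that pattern; the polling loop is an approximation of what waitforlisten does, not the verbatim helper:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do          # RPC socket appears once the app is up
        kill -0 "$nvmfpid" 2>/dev/null || exit 1 # give up if the target died on startup
        sleep 0.5
    done

-m 0xF0 pins the target to cores 4-7, which is why the reactor lines just below come up on cores 4, 5, 6 and 7; -e 0xFFFF enables all tracepoint groups, matching the Tracepoint Group Mask notice.)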
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:38:33.441 23:42:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:33.441 [2024-07-10 23:42:42.401866] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:38:33.441 [2024-07-10 23:42:42.401972] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:33.441 EAL: No free 2048 kB hugepages reported on node 1
00:38:33.700 [2024-07-10 23:42:42.523698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:33.700 [2024-07-10 23:42:42.735153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:33.700 [2024-07-10 23:42:42.735202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:33.700 [2024-07-10 23:42:42.735214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:33.700 [2024-07-10 23:42:42.735238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:33.700 [2024-07-10 23:42:42.735248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:33.700 [2024-07-10 23:42:42.735410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:38:33.700 [2024-07-10 23:42:42.735495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:38:33.700 [2024-07-10 23:42:42.735563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:38:33.700 [2024-07-10 23:42:42.735587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.269 Malloc0
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.269 [2024-07-10 23:42:43.311339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:34.269 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.528 [2024-07-10 23:42:43.339592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2673619
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:38:34.528 23:42:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:34.528 EAL: No free 2048 kB hugepages reported on node 1
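(The rpc_cmd calls above are what provision the target once it is listening on /var/tmp/spdk.sock; rpc_cmd appears to be the autotest wrapper around scripts/rpc.py, so the roughly equivalent direct commands would be:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc ramdisk, 512-byte blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o         # TCP transport, flags as in the log
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The reconnect example just launched is pointed at the same address/port pair ('traddr:10.0.0.2 trsvcid:4420'), so once the two-second sleep expires there is live I/O against cnode1 for the disconnect below to interrupt.)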
hugepages reported on node 1 00:38:36.441 23:42:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2673371 00:38:36.441 23:42:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 
starting I/O failed 00:38:36.441 [2024-07-10 23:42:45.378673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 [2024-07-10 23:42:45.379048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 
00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Read completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.441 starting I/O failed 00:38:36.441 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 [2024-07-10 23:42:45.379421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 
Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Write completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 Read completed with error (sct=0, sc=8) 00:38:36.442 starting I/O failed 00:38:36.442 [2024-07-10 23:42:45.379788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:36.442 [2024-07-10 23:42:45.380008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.380032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.380349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.380375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.380664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.380679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.380887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.380901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.381101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.381115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.381251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.381265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 
00:38:36.442 [2024-07-10 23:42:45.381549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.381591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.381891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.381932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.382218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.382259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.382503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.382541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.382787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.382826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.383050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.383091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.383359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.383375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.383504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.383517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.383693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.383707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.383831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.383844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 
00:38:36.442 [2024-07-10 23:42:45.384080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.384126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.384305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.384345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.384575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.384615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.384871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.384911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.385132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.385387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.385613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.385653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.385982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.386021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.386260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.386301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.386667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.386706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 00:38:36.442 [2024-07-10 23:42:45.386946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.442 [2024-07-10 23:42:45.386959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.442 qpair failed and we were unable to recover it. 
00:38:36.443 [2024-07-10 23:42:45.387080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.387093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.387259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.387274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.387380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.387393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.387624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.387664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.387929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.387969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.388215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.388256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.388537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.388577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.388874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.388914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.389221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.389261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.389502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.389543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 
00:38:36.443 [2024-07-10 23:42:45.389755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.389795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.390025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.390064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.390374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.390415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.390680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.390720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.390991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.391005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.391244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.391258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.391439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.391452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.391733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.391752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.391929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.391947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.392061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.392079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 
00:38:36.443 [2024-07-10 23:42:45.392179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.392198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.392454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.392495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.392659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.392698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.392918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.392958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.393265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.393306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.393636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.393676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.393903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.393921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.394186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.394211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.394404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.394422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.394605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.394624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 
00:38:36.443 [2024-07-10 23:42:45.394802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.394824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.395084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.395102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.395386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.395406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.395681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.395699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.395959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.395978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.396186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.396206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.396445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.396463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.396588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.396606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.443 qpair failed and we were unable to recover it. 00:38:36.443 [2024-07-10 23:42:45.396859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.443 [2024-07-10 23:42:45.396877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.397062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.397079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 
00:38:36.444 [2024-07-10 23:42:45.397271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.397289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.397565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.397583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.397823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.397841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.398124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.398143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.398293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.398312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.398505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.398523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.398770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.398789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.399005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.399023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.399191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.399211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.399478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.399497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 
00:38:36.444 [2024-07-10 23:42:45.399753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.399771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.400039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.400057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.400294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.400313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.400560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.400579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.400832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.400851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.401057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.401075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.401258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.401277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.401484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.401524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.401762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.401801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.402126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.402173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 
00:38:36.444 [2024-07-10 23:42:45.402435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.402475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.402695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.402735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.403037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.403076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.403399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.403420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.403684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.403702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.403967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.403986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.404250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.404269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.404454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.404471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.404675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.404693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 00:38:36.444 [2024-07-10 23:42:45.404882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.444 [2024-07-10 23:42:45.404901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.444 qpair failed and we were unable to recover it. 
00:38:36.444 [2024-07-10 23:42:45.405143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.405200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.405451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.405491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.405746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.405786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.406095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.406134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.406443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.406461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.406629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.406646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.406925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.406942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.407156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.407185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.444 [2024-07-10 23:42:45.407371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.444 [2024-07-10 23:42:45.407388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.444 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.407578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.407595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.407778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.407796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.407974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.408014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.408237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.408286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.408536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.408575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.408890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.408930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.409249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.409290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.409594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.409634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.409961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.410004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.410246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.410287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.410598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.410638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.410953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.410992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.411265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.411292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.411514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.411533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.411803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.411821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.412055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.412074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.412288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.412306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.412546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.412564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.412760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.412779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.412947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.412965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.413251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.413292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.413576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.413615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.413869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.413909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.414169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.414210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.414505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.414522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.414782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.414800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.414990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.415009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.415268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.415289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.415527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.415544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.415656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.415675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.415926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.415944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.416180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.416202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.445 [2024-07-10 23:42:45.416474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.445 [2024-07-10 23:42:45.416512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.445 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.416765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.416805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.417091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.417131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.417464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.417504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.417751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.417791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.418096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.418135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.418412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.418430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.418743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.418781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.419073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.419113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.419346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.419390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.419712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.419752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.420031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.420071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.420364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.420406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.420644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.420684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.421007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.421047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.421346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.421387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.421617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.421656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.421947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.421987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.422284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.422326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.422582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.422622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.422852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.422891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.423235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.423279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.423612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.423653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.423885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.423935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.424126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.424144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.424411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.424429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.424626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.424644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.424883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.424901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.425106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.425146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.425480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.425520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.425809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.425849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.426129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.426147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.426343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.426362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.426543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.426561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.426811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.426829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.427026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.427044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.427280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.427301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.427568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.427586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.427696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.427714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.427974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.427996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.428219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.446 [2024-07-10 23:42:45.428265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.446 qpair failed and we were unable to recover it.
00:38:36.446 [2024-07-10 23:42:45.428491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.428532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.428857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.428897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.429110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.429150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.429489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.429580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.429889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.429929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.430247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.430289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.430594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.430634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.430869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.430908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.431208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.431252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.431476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.431517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.431734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.431774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.432077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.432116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.432372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.432391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.432646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.432665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.432857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.432875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.433070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.433088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.433274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.433293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.433505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.433523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.433701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.433719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.433963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.434003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.434315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.434357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.434613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.434653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.434964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.435003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.435254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.435305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.435602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.435620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.435821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.435840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.436108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.436126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.436350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.436369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.436617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.436635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.436881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.436900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.437148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.437171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.437441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.437458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.437746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.437764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.437952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.437970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.438227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.438246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.438490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.438508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.438675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.438693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.438813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.438831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.439027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.439047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.439329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.439349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.439536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.447 [2024-07-10 23:42:45.439554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.447 qpair failed and we were unable to recover it.
00:38:36.447 [2024-07-10 23:42:45.439772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.439790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.440051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.440069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.440331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.440349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.440466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.440484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.440661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.440679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.440865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.440883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.441176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.441217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.441522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.441562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.441796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.441836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.442165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.442183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.442381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.442398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.442662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.442680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.442930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.442948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.443129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.443147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.443425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.443445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.443710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.443728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.443970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.443988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.444279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.444302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.444503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.444521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.444811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.444828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.445088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.445106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.445275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.445293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.445568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.445588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.445828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.445846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.446064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.446083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.446362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.446381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.446639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.446664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.446912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.446929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.447141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.447165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.447363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.447382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.447569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.447587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.447787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.447804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.448095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.448113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.448300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.448319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.448491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.448509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.448773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.448818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.449107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.449146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.449478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.449525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.449832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.449871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.450141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.450163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.450397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.448 [2024-07-10 23:42:45.450416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.448 qpair failed and we were unable to recover it.
00:38:36.448 [2024-07-10 23:42:45.450586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.449 [2024-07-10 23:42:45.450604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.449 qpair failed and we were unable to recover it.
00:38:36.449 [2024-07-10 23:42:45.450878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.449 [2024-07-10 23:42:45.450917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.449 qpair failed and we were unable to recover it.
00:38:36.449 [2024-07-10 23:42:45.451207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.449 [2024-07-10 23:42:45.451255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.449 qpair failed and we were unable to recover it.
00:38:36.449 [2024-07-10 23:42:45.451500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.449 [2024-07-10 23:42:45.451519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.449 qpair failed and we were unable to recover it.
00:38:36.449 [2024-07-10 23:42:45.451760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.449 [2024-07-10 23:42:45.451778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.449 qpair failed and we were unable to recover it.
00:38:36.449 [2024-07-10 23:42:45.451983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.449 [2024-07-10 23:42:45.452001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.449 qpair failed and we were unable to recover it.
00:38:36.449 [2024-07-10 23:42:45.452247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.452289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.452545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.452586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.452900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.452939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.453153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.453202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.453500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.453541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.453870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.453910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.454069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.454109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.454428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.454468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.454758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.454797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.455082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.455122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 
00:38:36.449 [2024-07-10 23:42:45.455402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.455445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.455657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.455696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.456004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.456044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.456292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.456334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.456620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.456659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.456939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.456979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.457271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.457312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.457681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.457765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.458187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.458273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 00:38:36.449 [2024-07-10 23:42:45.458606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.449 [2024-07-10 23:42:45.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.449 qpair failed and we were unable to recover it. 
00:38:36.453 [2024-07-10 23:42:45.499722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.453 [2024-07-10 23:42:45.499735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.453 qpair failed and we were unable to recover it. 00:38:36.453 [2024-07-10 23:42:45.499930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.453 [2024-07-10 23:42:45.499943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.453 qpair failed and we were unable to recover it. 00:38:36.453 [2024-07-10 23:42:45.500180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.453 [2024-07-10 23:42:45.500195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.453 qpair failed and we were unable to recover it. 00:38:36.453 [2024-07-10 23:42:45.500376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.453 [2024-07-10 23:42:45.500389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.453 qpair failed and we were unable to recover it. 00:38:36.453 [2024-07-10 23:42:45.500586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.453 [2024-07-10 23:42:45.500625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.729 qpair failed and we were unable to recover it. 00:38:36.729 [2024-07-10 23:42:45.500939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.729 [2024-07-10 23:42:45.500980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.729 qpair failed and we were unable to recover it. 00:38:36.729 [2024-07-10 23:42:45.501267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.729 [2024-07-10 23:42:45.501280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.729 qpair failed and we were unable to recover it. 00:38:36.729 [2024-07-10 23:42:45.501536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.501575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.501932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.502016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.502473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.502514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 
00:38:36.730 [2024-07-10 23:42:45.502800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.502815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.503014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.503028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.503255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.503269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.503499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.503512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.503791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.503804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.504047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.504060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.504278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.504292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.504455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.504468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.504713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.504753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.505041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.505080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 
00:38:36.730 [2024-07-10 23:42:45.505309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.505325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.505588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.505601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.505759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.505810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.506058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.506098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.506388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.506428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.506718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.506758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.507060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.507109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.507290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.507304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.507584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.507623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.507873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.507912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 
00:38:36.730 [2024-07-10 23:42:45.508142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.508206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.508517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.508557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.508859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.508899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.509156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.509210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.509451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.509490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.509646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.509685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.509943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.509983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.510220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.510234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.510504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.510517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.510692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.510705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 
00:38:36.730 [2024-07-10 23:42:45.510807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.510820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.511000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.511014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.511190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.511211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.511420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.511460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.511682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.511721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.512061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.512101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.512354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.512394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.512715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.730 [2024-07-10 23:42:45.512765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.730 qpair failed and we were unable to recover it. 00:38:36.730 [2024-07-10 23:42:45.512938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.512980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.513212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.513254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 
00:38:36.731 [2024-07-10 23:42:45.513571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.513612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.513850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.513891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.514121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.514172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.514411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.514429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.514621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.514639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.514843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.514860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.515191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.515206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.515421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.515461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.515625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.515665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.515973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.516012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 
00:38:36.731 [2024-07-10 23:42:45.516237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.516278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.516561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.516600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.516891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.516930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.517182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.517223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.517502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.517541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.517790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.517829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.518093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.518134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.518470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.518510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.518742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.518781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.519070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.519109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 
00:38:36.731 [2024-07-10 23:42:45.519414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.519455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.519718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.519731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.520007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.520020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.520183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.520196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.520382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.520422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.520720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.520759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.520989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.521028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.521257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.521270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.521523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.521536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.521708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.521720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 
00:38:36.731 [2024-07-10 23:42:45.521954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.521993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.522285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.522325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.522600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.522613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.522840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.522853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.523053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.523065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.523269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.523310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.523506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.523546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.523712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.523756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.731 qpair failed and we were unable to recover it. 00:38:36.731 [2024-07-10 23:42:45.524054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.731 [2024-07-10 23:42:45.524093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.524366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.524409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 
00:38:36.732 [2024-07-10 23:42:45.524646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.524685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.524964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.525004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.525186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.525228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.525529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.525568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.525860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.525899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.526125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.526174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.526505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.526544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.526763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.526776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.527027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.527039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.527266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.527279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 
00:38:36.732 [2024-07-10 23:42:45.527545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.527558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.527731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.527744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.527999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.528012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.528191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.528204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.528382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.528395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.528587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.528627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.528927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.528966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.529307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.529348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.529565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.529604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.529822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.529862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 
00:38:36.732 [2024-07-10 23:42:45.530176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.530217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.530388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.530427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.530631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.530643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.530833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.530846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.531105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.531118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.531303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.531324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.531531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.531569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.531878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.531917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.532226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.532271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.532526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.532538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 
00:38:36.732 [2024-07-10 23:42:45.532785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.532798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.532977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.532989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.533222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.533263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.533567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.533606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.533783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.533821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.534095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.534134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.534476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.534489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.534742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.534757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.534928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.534941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.732 qpair failed and we were unable to recover it. 00:38:36.732 [2024-07-10 23:42:45.535119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.732 [2024-07-10 23:42:45.535132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 
00:38:36.733 [2024-07-10 23:42:45.535418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.535459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.535641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.535680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.535968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.536006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.536223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.536264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.536551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.536591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.536909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.536948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.537250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.537291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.537547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.537586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.537816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.537829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.538104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.538117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 
00:38:36.733 [2024-07-10 23:42:45.538346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.538359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.538553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.538566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.538815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.538828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.538934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.538946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.539204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.539217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.539392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.539404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.539667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.539706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.539917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.539956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.540238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.540277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 00:38:36.733 [2024-07-10 23:42:45.540391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.733 [2024-07-10 23:42:45.540404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.733 qpair failed and we were unable to recover it. 
00:38:36.733 [2024-07-10 23:42:45.540610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.733 [2024-07-10 23:42:45.540623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:36.733 qpair failed and we were unable to recover it.
00:38:36.733 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 23:42:45.540886 and 23:42:45.591953, almost always for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; isolated occurrences also report tqpair=0x61500032ff80, 0x61500032d780, and 0x615000350000 at the same address and port ...]
00:38:36.739 [2024-07-10 23:42:45.592243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.592257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.592469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.592482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.592757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.592771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.593066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.593080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.593191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.593203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.593385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.593399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.593568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.593581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.593858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.593874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.594067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.594080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.594208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.594221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 
00:38:36.739 [2024-07-10 23:42:45.594371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.594384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.594555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.594569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.594800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.594814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.594919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.594934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.595205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.595224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.595357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.595370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.595568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.595581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.595813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.595826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.595994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.596009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.596206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.596249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 
00:38:36.739 [2024-07-10 23:42:45.596533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.596574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.596884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.596924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.597182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.597224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.597527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.597566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.597808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.597821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.598030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.598043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.598269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.598283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.598534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.598547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.598785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.598798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.599098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.599112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 
00:38:36.739 [2024-07-10 23:42:45.599344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.599358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.599537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.739 [2024-07-10 23:42:45.599591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.739 qpair failed and we were unable to recover it. 00:38:36.739 [2024-07-10 23:42:45.599765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.599803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.600072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.600112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.600334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.600386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.600693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.600712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.600834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.600853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.601038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.601057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.601266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.601285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.601478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.601496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 
00:38:36.740 [2024-07-10 23:42:45.601680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.601699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.601860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.601879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.602130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.602149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.602329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.602344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.602519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.602532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.602659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.602672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.602834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.602848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.603020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.603035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.603228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.603242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.603410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.603424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 
00:38:36.740 [2024-07-10 23:42:45.603608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.603622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.603809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.603849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.604077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.604116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.604431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.604445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.604638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.604652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.604838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.604851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.605053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.605067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.605266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.605279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.605406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.605419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.605618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.605631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 
00:38:36.740 [2024-07-10 23:42:45.605927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.605969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.606200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.606242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.606566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.606579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.606787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.606800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.606981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.606995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.607178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.607192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.607314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.607327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.607600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.607614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.607857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.607870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.608036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.608048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 
00:38:36.740 [2024-07-10 23:42:45.608329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.608343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.608574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.608588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.608783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.740 [2024-07-10 23:42:45.608796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.740 qpair failed and we were unable to recover it. 00:38:36.740 [2024-07-10 23:42:45.609003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.609016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.609195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.609210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.609390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.609403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.609503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.609516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.609632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.609645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.609831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.609844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.610104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.610117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 
00:38:36.741 [2024-07-10 23:42:45.610284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.610297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.610450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.610463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.610657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.610670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.610786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.610799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.611047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.611060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.611262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.611275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.611497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.611514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.611637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.611657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.611940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.611953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.612205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.612219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 
00:38:36.741 [2024-07-10 23:42:45.612470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.612484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.612667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.612679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.612909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.612922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.613109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.613123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.613303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.613318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.613568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.613580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.613768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.613781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.613980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.613992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.614276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.614289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.614398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.614412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 
00:38:36.741 [2024-07-10 23:42:45.614536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.614548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.614668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.614681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.614864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.614879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.615106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.615120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.615345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.615358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.615536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.615549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.615729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.615742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.615902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.615916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.616098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.616110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.616232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.616247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 
00:38:36.741 [2024-07-10 23:42:45.616437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.616451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.616670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.616683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.616784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.616796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.616973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.741 [2024-07-10 23:42:45.616985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.741 qpair failed and we were unable to recover it. 00:38:36.741 [2024-07-10 23:42:45.617182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.617197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.617295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.617308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.617495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.617508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.617633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.617647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.617768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.617781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.617952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.617965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 
00:38:36.742 [2024-07-10 23:42:45.618180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.618221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.618402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.618442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.618688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.618736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.618935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.618949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.619059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.619072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.619280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.619293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.619576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.619590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.619719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.619734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.620019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.620034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.620282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.620296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 
00:38:36.742 [2024-07-10 23:42:45.620479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.620521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.620751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.620792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.621112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.621152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.621461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.621501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.621736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.621749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.621926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.621940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.622052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.622064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.622322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.622339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.622471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.622485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.622693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.622705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 
00:38:36.742 [2024-07-10 23:42:45.622858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.622871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.623048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.623062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.623360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.623402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.623687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.623726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.624061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.624101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.624332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.624372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.624575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.624590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.624815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.624828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.625063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.625076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.742 [2024-07-10 23:42:45.625267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.625308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 
00:38:36.742 [2024-07-10 23:42:45.625551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.742 [2024-07-10 23:42:45.625565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.742 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.625700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.625751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.626030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.626068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.626295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.626337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.626657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.626698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.626956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.626968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.627251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.627264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.627440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.627454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.627629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.627643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.627834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.627874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 
00:38:36.743 [2024-07-10 23:42:45.628152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.628204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.628452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.628465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.628579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.628592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.628765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.628778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.628963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.628976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.629174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.629187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.629369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.629382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.629498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.629514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.629710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.629723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.629946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.629959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 
00:38:36.743 [2024-07-10 23:42:45.630141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.630156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.630374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.630387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.630622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.630662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.631023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.631063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.631339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.631383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.631688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.631701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.631824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.631837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.632038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.632051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.632204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.632218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.632398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.632411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 
00:38:36.743 [2024-07-10 23:42:45.632598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.632611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.632831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.632871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.633174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.633234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.633462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.633474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.633650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.633663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.633784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.633797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.633920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.633933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.634114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.634127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.634236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.634249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.743 [2024-07-10 23:42:45.634368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.634381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 
00:38:36.743 [2024-07-10 23:42:45.634498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.743 [2024-07-10 23:42:45.634510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.743 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.634672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.634686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.634879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.634892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.635005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.635018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.635187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.635203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.635327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.635340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.635498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.635511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.635624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.635637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.635961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.636002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.636317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.636357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 
00:38:36.744 [2024-07-10 23:42:45.636539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.636579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.636790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.636829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.637008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.637049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.637357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.637399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.637695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.637736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.637945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.637958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.638198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.638213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.638402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.638415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.638593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.638606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.638732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.638744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 
00:38:36.744 [2024-07-10 23:42:45.638991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.639004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.639221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.639235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.639407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.639421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.639581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.639594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.639780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.639794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.640048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.640067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.640239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.640253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.640440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.640480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.640724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.640765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.640990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.641003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 
00:38:36.744 [2024-07-10 23:42:45.641228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.641269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.641502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.641542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.641767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.641810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.641964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.641979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.642140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.642154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.642300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.642338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.642532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.642571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.642825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.642838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.643022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.643035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.643304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.643317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 
00:38:36.744 [2024-07-10 23:42:45.643482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.643523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.744 [2024-07-10 23:42:45.643857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.744 [2024-07-10 23:42:45.643896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.744 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.644049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.644089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.644444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.644477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.644599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.644615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.644816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.644829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.645076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.645089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.645364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.645377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.645608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.645623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.645738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.645752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 
00:38:36.745 [2024-07-10 23:42:45.645869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.645882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.646055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.646068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.646333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.646376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.646590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.646603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.646771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.646784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.647086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.647281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.647405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.647597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.647731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 
00:38:36.745 [2024-07-10 23:42:45.647855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.647965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.647977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.648089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.648102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.648227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.648240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.648364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.648377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.648547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.648560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.648710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.648722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.648978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.648992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.649250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.649264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.649433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.649446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 
00:38:36.745 [2024-07-10 23:42:45.649551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.649563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.649796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.649808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.650058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.650070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.650246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.650259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.650421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.650436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.650605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.650622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.650917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.650930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.651110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.651126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.651305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.651318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.651478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.651491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 
00:38:36.745 [2024-07-10 23:42:45.651663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.651677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.651790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.745 [2024-07-10 23:42:45.651805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.745 qpair failed and we were unable to recover it. 00:38:36.745 [2024-07-10 23:42:45.651982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.651996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.652263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.652279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.652391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.652407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.652565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.652578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.652697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.652709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.652868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.652880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.653171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.653214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.653427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.653478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 
00:38:36.746 [2024-07-10 23:42:45.653658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.653697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.653912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.653924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.654109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.654122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.654222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.654235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.654462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.654475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.654606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.654619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.654792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.654806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.654977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.654991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.655219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.655233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.655394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.655407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 
00:38:36.746 [2024-07-10 23:42:45.655656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.655670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.655859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.655872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.656044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.656057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.656228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.656241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.656414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.656461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.656654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.656693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.656987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.657000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.657227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.657240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.657375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.657388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.657565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.657578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 
00:38:36.746 [2024-07-10 23:42:45.657693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.657706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.657954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.657967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.658217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.658231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.658460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.658473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.658655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.658668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.658895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.658934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.659174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.659218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.659454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.659493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.659713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.659726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 00:38:36.746 [2024-07-10 23:42:45.659872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.746 [2024-07-10 23:42:45.659912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.746 qpair failed and we were unable to recover it. 
00:38:36.751 [2024-07-10 23:42:45.711647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.751 [2024-07-10 23:42:45.711694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-07-10 23:42:45.711952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.751 [2024-07-10 23:42:45.711964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-07-10 23:42:45.712222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.751 [2024-07-10 23:42:45.712264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-07-10 23:42:45.712443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.751 [2024-07-10 23:42:45.712483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-07-10 23:42:45.712661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.751 [2024-07-10 23:42:45.712673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-07-10 23:42:45.712832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.751 [2024-07-10 23:42:45.712868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.713130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.713193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.713390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.713430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.713649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.713689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.713970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.714009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 
00:38:36.752 [2024-07-10 23:42:45.714306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.714349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.714573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.714614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.714877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.714890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.715090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.715105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.715306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.715322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.715605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.715645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.715965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.716004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.716178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.716217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.716385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.716426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.716657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.716669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 
00:38:36.752 [2024-07-10 23:42:45.716853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.716868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.717043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.717081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.717448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.717490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.717662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.717675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.717964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.718004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.718289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.718331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.718548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.718588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.718819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.718858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.718985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.718999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.719272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.719313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 
00:38:36.752 [2024-07-10 23:42:45.719660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.719705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.719916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.719928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.720108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.720120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.720296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.720336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.720631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.720675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.720977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.720994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.721318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.721359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.721608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.721646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.721943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.721983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.722206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.722247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 
00:38:36.752 [2024-07-10 23:42:45.722418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.722458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.722789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.722828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.723183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.723225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.723450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.723489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.723663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.723676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.723987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.724026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.724340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.752 [2024-07-10 23:42:45.724382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.752 qpair failed and we were unable to recover it. 00:38:36.752 [2024-07-10 23:42:45.724596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.724635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.724814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.724853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.725046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.725058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 
00:38:36.753 [2024-07-10 23:42:45.725151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.725168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.725406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.725443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.725604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.725643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.725819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.725858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.726178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.726194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.726488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.726501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.726681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.726694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.727015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.727055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.727311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.727352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.727530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.727569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 
00:38:36.753 [2024-07-10 23:42:45.727755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.727794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.728044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.728057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.728170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.728183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.728302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.728316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.728439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.728452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.728623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.728635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.728993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.729034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.729310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.729350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.729592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.729632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.729866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.729906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 
00:38:36.753 [2024-07-10 23:42:45.730157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.730176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.730291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.730304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.730531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.730544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.730717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.730748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.731052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.731092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.731289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.731331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.731526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.731565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.731741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.731780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.732005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.732017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.732246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.732287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 
00:38:36.753 [2024-07-10 23:42:45.732534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.732573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.732942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.733028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.733362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.733412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.733656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.733700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.734017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.734058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.734342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.734384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.734679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.734721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.735033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.753 [2024-07-10 23:42:45.735074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.753 qpair failed and we were unable to recover it. 00:38:36.753 [2024-07-10 23:42:45.735235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.735276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.735465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.735505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 
00:38:36.754 [2024-07-10 23:42:45.735666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.735685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.735933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.735973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.736211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.736256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.736546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.736587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.736780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.736828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.737121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.737179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.737416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.737457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.737712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.737752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.737937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.737978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.738272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.738293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 
00:38:36.754 [2024-07-10 23:42:45.738526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.738545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.738747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.738764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.739005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.739023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.739330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.739371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.739559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.739600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.739851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.739905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.740144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.740169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.740377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.740396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.740611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.740630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.740770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.740787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 
00:38:36.754 [2024-07-10 23:42:45.741117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.741157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.741424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.741465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.741621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.741661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.741971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.741990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.742120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.742138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.742351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.742392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.742608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.742648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.742970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.743008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.743288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.743330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.743629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.743679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 
00:38:36.754 [2024-07-10 23:42:45.743877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.743895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.744118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.744153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.744309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.744326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.744477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.744491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.744629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.744642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.744815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.744828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.745085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.745126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.745383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.745425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.745605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.754 [2024-07-10 23:42:45.745646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.754 qpair failed and we were unable to recover it. 00:38:36.754 [2024-07-10 23:42:45.745972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.746012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 
00:38:36.755 [2024-07-10 23:42:45.746294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.746336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.746505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.746545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.746818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.746858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.747024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.747063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.747225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.747277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.747541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.747581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.747759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.747799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.748083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.748123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.748350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.748365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 00:38:36.755 [2024-07-10 23:42:45.748485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.755 [2024-07-10 23:42:45.748503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:36.755 qpair failed and we were unable to recover it. 
00:38:36.755 [2024-07-10 23:42:45.748672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.755 [2024-07-10 23:42:45.748685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:36.755 qpair failed and we were unable to recover it.
00:38:36.755 [2024-07-10 23:42:45.752966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.755 [2024-07-10 23:42:45.752979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:36.755 qpair failed and we were unable to recover it.
00:38:36.755 [2024-07-10 23:42:45.753186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:36.755 [2024-07-10 23:42:45.753210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:36.755 qpair failed and we were unable to recover it.
00:38:37.039 [2024-07-10 23:42:45.780521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.039 [2024-07-10 23:42:45.780541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.039 qpair failed and we were unable to recover it.
00:38:37.039 [2024-07-10 23:42:45.780702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.039 [2024-07-10 23:42:45.780746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.039 qpair failed and we were unable to recover it.
00:38:37.040 [2024-07-10 23:42:45.796710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.040 [2024-07-10 23:42:45.796729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.040 qpair failed and we were unable to recover it.
00:38:37.040 [2024-07-10 23:42:45.797032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.797052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.040 qpair failed and we were unable to recover it. 00:38:37.040 [2024-07-10 23:42:45.797182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.797200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.040 qpair failed and we were unable to recover it. 00:38:37.040 [2024-07-10 23:42:45.797449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.797466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.040 qpair failed and we were unable to recover it. 00:38:37.040 [2024-07-10 23:42:45.797779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.797798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.040 qpair failed and we were unable to recover it. 00:38:37.040 [2024-07-10 23:42:45.798086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.798104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.040 qpair failed and we were unable to recover it. 00:38:37.040 [2024-07-10 23:42:45.798360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.798379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.040 qpair failed and we were unable to recover it. 00:38:37.040 [2024-07-10 23:42:45.798564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.040 [2024-07-10 23:42:45.798582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.798771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.798789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.799089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.799107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.799386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.799404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 
00:38:37.041 [2024-07-10 23:42:45.799519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.799537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.799730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.799751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.800009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.800026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.800214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.800233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.800410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.800429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.800604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.800622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.800864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.800882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.801122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.801141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.801403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.801421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.801648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.801667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 
00:38:37.041 [2024-07-10 23:42:45.801932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.801950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.802209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.802228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.802488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.802506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.802726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.802744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.803004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.803022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.803311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.803330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.803580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.803598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.803803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.803822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.804061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.804079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.804287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.804306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 
00:38:37.041 [2024-07-10 23:42:45.804567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.804585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.804769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.804791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.805032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.805050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.805321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.805340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.805592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.805609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.805875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.805893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.806130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.806148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.806382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.806401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.806515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.806533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.806800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.806818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 
00:38:37.041 [2024-07-10 23:42:45.806988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.807006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.807197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.807216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.807445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.807463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.807748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.807767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.807983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.808023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.808314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.808333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.808515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.041 [2024-07-10 23:42:45.808533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.041 qpair failed and we were unable to recover it. 00:38:37.041 [2024-07-10 23:42:45.808827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.808845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.809070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.809088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.809280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.809298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 
00:38:37.042 [2024-07-10 23:42:45.809560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.809578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.809847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.809868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.810075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.810093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.810209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.810227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.810356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.810375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.810643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.810661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.810870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.810888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.811020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.811038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.811205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.811224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.811441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.811480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 
00:38:37.042 [2024-07-10 23:42:45.811783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.811822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.812121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.812173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.812391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.812431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.812722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.812762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.813062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.813101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.813395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.813436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.813722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.813761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.814075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.814114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.814426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.814466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.814718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.814757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 
00:38:37.042 [2024-07-10 23:42:45.815016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.815033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.815232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.815251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.815373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.815390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.815658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.815697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.815998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.816038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.816205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.816245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.816424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.816443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.816651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.816668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.816803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.816821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.817081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.817121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 
00:38:37.042 [2024-07-10 23:42:45.817385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.817425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.817653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.817692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.817958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.817997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.818268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.818287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.818479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.818497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.818761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.818778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.819020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.819038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.042 [2024-07-10 23:42:45.819234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.042 [2024-07-10 23:42:45.819253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.042 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.819373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.819391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.819574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.819613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 
00:38:37.043 [2024-07-10 23:42:45.819825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.819864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.820128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.820184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.820352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.820390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.820692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.820731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.820903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.820942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.821210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.821252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.821458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.821477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.821593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.821611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.821754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.821772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.821950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.821968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 
00:38:37.043 [2024-07-10 23:42:45.822099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.822138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.822381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.822421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.822646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.822686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.822980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.823028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.823131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.823149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.823377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.823395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.823667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.823706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.823899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.823943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.824096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.824151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.824344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.824362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 
00:38:37.043 [2024-07-10 23:42:45.824572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.824612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.824794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.824834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.824984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.825022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.825178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.825220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.825374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.825413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.825573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.825612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.825892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.825931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.826176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.826217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.826459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.826498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.043 [2024-07-10 23:42:45.826676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.826717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 
00:38:37.043 [2024-07-10 23:42:45.826895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.043 [2024-07-10 23:42:45.826914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.043 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.827101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.827140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.827452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.827492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.827712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.827751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.827913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.827953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.828112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.828151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.828403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.828443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.828714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.828753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.828933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.828973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 00:38:37.044 [2024-07-10 23:42:45.829133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.044 [2024-07-10 23:42:45.829202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.044 qpair failed and we were unable to recover it. 
00:38:37.044 [2024-07-10 23:42:45.829379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.044 [2024-07-10 23:42:45.829418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.044 qpair failed and we were unable to recover it.
00:38:37.044 [the three messages above repeated 67 more times for tqpair=0x615000350000, timestamps 23:42:45.829783 through 23:42:45.842283]
00:38:37.045 [same connect() failed / sock connection error / qpair failed sequence repeated 40 times for tqpair=0x61500032d780, timestamps 23:42:45.842436 through 23:42:45.850177]
00:38:37.046 [same sequence repeated 21 times for tqpair=0x615000350000, timestamps 23:42:45.850348 through 23:42:45.854124]
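On Linux, errno = 111 is ECONNREFUSED: each TCP connect to 10.0.0.2:4420 was actively refused, meaning no NVMe/TCP listener was accepting on that address/port while these attempts ran. A minimal sketch with plain POSIX sockets (this is an illustration, not SPDK's posix_sock_create) reproduces the same errno when nothing listens on the target port; the address and port are taken from the log:

/* Illustrative only: shows how a refused TCP connect surfaces as
 * errno 111 (ECONNREFUSED), the errno reported by posix_sock_create
 * above. 10.0.0.2:4420 come from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),        /* NVMe/TCP port from the log */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		/* With no listener on 10.0.0.2:4420 this prints errno = 111. */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}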
00:38:37.047 [2024-07-10 23:42:45.854260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:37.047 [2024-07-10 23:42:45.854487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.047 [2024-07-10 23:42:45.854516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.047 qpair failed and we were unable to recover it.
00:38:37.047 [2024-07-10 23:42:45.854730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.047 [2024-07-10 23:42:45.854762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.047 qpair failed and we were unable to recover it.
00:38:37.047 [same connect() failed / sock connection error / qpair failed sequence repeated 7 times for tqpair=0x61500032d780, timestamps 23:42:45.854947 through 23:42:45.855996]
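The single nvme_tcp_qpair_set_recv_state message above is a different diagnostic from the connect failures: the qpair was asked to move to recv state 5 while already in that state. A generic sketch of the idempotent state-setter pattern that emits this kind of message follows; the struct, enum value, and function names here are hypothetical stand-ins, not SPDK's actual definitions:

/* Hypothetical sketch of an idempotent state setter that warns when
 * asked to re-enter the current state, mirroring the log message above.
 * Names and the enum are illustrative, not SPDK's real code. */
#include <stdio.h>

enum recv_state { RECV_STATE_5 = 5 /* other states elided */ };

struct tqpair {
	enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *q, enum recv_state s)
{
	if (q->recv_state == s) {
		/* Re-entering the current state: log it and do nothing. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)q, (int)s);
		return;
	}
	q->recv_state = s;
}

int main(void)
{
	struct tqpair q = { .recv_state = RECV_STATE_5 };
	set_recv_state(&q, RECV_STATE_5);   /* triggers the warning path */
	return 0;
}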
00:38:37.047 [same connect() failed / sock connection error / qpair failed sequence repeated 5 more times for tqpair=0x61500032d780, timestamps 23:42:45.856263 through 23:42:45.857045]
00:38:37.047 [same sequence repeated 65 times for tqpair=0x61500033fe80, timestamps 23:42:45.857181 through 23:42:45.866810]
00:38:37.049 [2024-07-10 23:42:45.866904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.866917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.867973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.867986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.868158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.868176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.868354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.868366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 
00:38:37.049 [2024-07-10 23:42:45.868474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.868487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.868595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.868609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.868714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.868727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.868841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.868854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 
00:38:37.049 [2024-07-10 23:42:45.869822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.049 [2024-07-10 23:42:45.869836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.049 qpair failed and we were unable to recover it. 00:38:37.049 [2024-07-10 23:42:45.869948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.869960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.870068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.870314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.870496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.870675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.870797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.870891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.870989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.871003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.871236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.871250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 
00:38:37.050 [2024-07-10 23:42:45.871366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.871379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.871480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.871493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.871603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.871616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.871857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.871898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.872232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.872272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.872491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.872504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.872699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.872712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.872887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.872900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.873020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.873033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.873198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.873212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 
00:38:37.050 [2024-07-10 23:42:45.873324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.873342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.873445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.873459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.873662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.873700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.873920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.873959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.874111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.874151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.874294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.874307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.874408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.874423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.874526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.874539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.874699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.874712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.874885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.874898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 
00:38:37.050 [2024-07-10 23:42:45.875011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.875182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.875386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.875502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.875635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.875759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.875879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.875891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.876009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.876022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.876117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.876130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.876229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.876243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 
00:38:37.050 [2024-07-10 23:42:45.876471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.876484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.876597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.876609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.050 qpair failed and we were unable to recover it. 00:38:37.050 [2024-07-10 23:42:45.876773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.050 [2024-07-10 23:42:45.876786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.876951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.876964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.877081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.877094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.877261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.877275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.877502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.877515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.877632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.877644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.877905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.877945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.878107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.878146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 
00:38:37.051 [2024-07-10 23:42:45.878397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.878445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.878539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.878552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.878654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.878667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.878780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.878792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.878949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.878961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.879071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.879186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.879319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.879501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.879672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 
00:38:37.051 [2024-07-10 23:42:45.879805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.879908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.879921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.880823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.880836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.881005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.881018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 
00:38:37.051 [2024-07-10 23:42:45.881198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.881210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.881370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.881383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.881614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.881627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.881788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.881801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.882010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.882023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.882205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.882218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.882347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.882360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.882457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.051 [2024-07-10 23:42:45.882470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.051 qpair failed and we were unable to recover it. 00:38:37.051 [2024-07-10 23:42:45.882702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.882716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.882808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.882821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 
00:38:37.052 [2024-07-10 23:42:45.882982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.882995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.883220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.883234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.883324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.883337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.883585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.883599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.883719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.883735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.883980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.884020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.884243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.884284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.884570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.884584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.884744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.884756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.884921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.884934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 
00:38:37.052 [2024-07-10 23:42:45.885117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.885129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.885306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.885319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.885494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.885507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.885623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.885636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.885757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.885770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.885871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.885883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.886030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.886043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.886201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.886238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.886524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.886564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.886724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.886763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 
00:38:37.052 [2024-07-10 23:42:45.886919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.886959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.887104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.887143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.887314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.887353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.887610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.887623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.887876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.887889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.888039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.888053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.888217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.888233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.888383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.888396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.888498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.888511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 00:38:37.052 [2024-07-10 23:42:45.888688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.052 [2024-07-10 23:42:45.888701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.052 qpair failed and we were unable to recover it. 
00:38:37.052 [2024-07-10 23:42:45.888818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.052 [2024-07-10 23:42:45.888832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.052 qpair failed and we were unable to recover it.
00:38:37.057 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> qpair failed and we were unable to recover it) repeats for every reconnect attempt from 23:42:45.888912 through 23:42:45.923767, almost always against tqpair=0x61500033fe80; isolated attempts against tqpair=0x615000350000, 0x61500032ff80, and 0x61500032d780 fail identically, all targeting addr=10.0.0.2, port=4420 ...]
00:38:37.058 [2024-07-10 23:42:45.923878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.923892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.924006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.924019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.924185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.924199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.924373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.924386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.924543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.924555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.924765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.924778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.924897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.924910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.925089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.925104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.925240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.925253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.925438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.925452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 
00:38:37.058 [2024-07-10 23:42:45.925663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.925703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.925851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.925891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.926034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.926072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.926246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.926261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.926421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.926434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.926538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.926551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.926785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.926799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.926912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.926926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.927019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.927032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.927215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.927229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 
00:38:37.058 [2024-07-10 23:42:45.927400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.927415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.927525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.927538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.927711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.927724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.927887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.927900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.927992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.928005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.928166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.928180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.928362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.928401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.928549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.928587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.928743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.928783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.929026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.929086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 
00:38:37.058 [2024-07-10 23:42:45.929270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.929311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.929462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.929475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.929715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.929754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.929910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.929949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.930106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.930143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.930391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.930403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.930505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.058 [2024-07-10 23:42:45.930518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.058 qpair failed and we were unable to recover it. 00:38:37.058 [2024-07-10 23:42:45.930625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.930638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.930754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.930766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.930879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.930891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 
00:38:37.059 [2024-07-10 23:42:45.931050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.931064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.931189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.931202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.931376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.931389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.931559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.931572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.931733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.931746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.931836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.931849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.932086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.932098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.932208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.932221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.932399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.932412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.932649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.932662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 
00:38:37.059 [2024-07-10 23:42:45.932786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.932799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.932960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.932974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.933237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.933250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.933363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.933378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.933485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.933498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.933659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.933673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.933831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.933844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.933953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.933965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.934150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.934200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.934423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.934463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 
00:38:37.059 [2024-07-10 23:42:45.934679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.934718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.934937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.934977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.935149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.935200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.935361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.935399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.935671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.935683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.935861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.935873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.936030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.936142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.936351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.936470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 
00:38:37.059 [2024-07-10 23:42:45.936646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.936781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.936898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.936911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.937115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.937127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.937296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.937309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.937415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.937428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.937543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.937555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.059 [2024-07-10 23:42:45.937652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.059 [2024-07-10 23:42:45.937665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.059 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.937763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.937776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.937883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.937895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 
00:38:37.060 [2024-07-10 23:42:45.938002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.938940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.938953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.939131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.939143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.939249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.939263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 
00:38:37.060 [2024-07-10 23:42:45.939367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.939379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.939479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.939492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.939621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.939636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.939825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.939845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 
00:38:37.060 [2024-07-10 23:42:45.940679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.940911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.940924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.941864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.941876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 
00:38:37.060 [2024-07-10 23:42:45.942048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.942062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.942242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.942255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.060 [2024-07-10 23:42:45.942356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.060 [2024-07-10 23:42:45.942369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.060 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.942469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.942481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.942639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.942652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.942826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.942839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 
00:38:37.061 [2024-07-10 23:42:45.943542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.943902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.943915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.944008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.944020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.944119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.944132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.944229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.944242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.944340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.944353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.944457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.944470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 00:38:37.061 [2024-07-10 23:42:45.944573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.061 [2024-07-10 23:42:45.944585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.061 qpair failed and we were unable to recover it. 
00:38:37.061 [2024-07-10 23:42:45.944762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.061 [2024-07-10 23:42:45.944775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.061 qpair failed and we were unable to recover it.
00:38:37.065 (previous 3 messages repeated 149 more times for tqpair=0x61500033fe80, timestamps 23:42:45.944944 through 23:42:45.963807)
00:38:37.065 [2024-07-10 23:42:45.963914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.065 [2024-07-10 23:42:45.963940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.065 qpair failed and we were unable to recover it.
00:38:37.065 (previous 3 messages repeated 14 more times for tqpair=0x61500032ff80, timestamps 23:42:45.964055 through 23:42:45.965974)
00:38:37.065 [2024-07-10 23:42:45.966149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.065 [2024-07-10 23:42:45.966167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.065 qpair failed and we were unable to recover it.
00:38:37.066 (previous 3 messages repeated 24 more times for tqpair=0x61500033fe80, timestamps 23:42:45.966343 through 23:42:45.969606)
00:38:37.066 [2024-07-10 23:42:45.969793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.066 [2024-07-10 23:42:45.969823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.066 qpair failed and we were unable to recover it.
00:38:37.066 (previous 3 messages repeated 8 more times for tqpair=0x61500032d780, timestamps 23:42:45.969986 through 23:42:45.970912)
00:38:37.066 [2024-07-10 23:42:45.971007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.066 [2024-07-10 23:42:45.971021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.066 qpair failed and we were unable to recover it.
00:38:37.066 (previous 3 messages repeated 9 more times for tqpair=0x61500033fe80, timestamps 23:42:45.971132 through 23:42:45.972244)
00:38:37.066 [2024-07-10 23:42:45.972331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.972343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.972508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.972520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.972615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.972628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.972741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.972755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.972866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.972879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.973041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.973054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.973147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.973164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.973257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.973269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.973361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.973372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.973467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.973479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 
00:38:37.066 [2024-07-10 23:42:45.973651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.066 [2024-07-10 23:42:45.973665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.066 qpair failed and we were unable to recover it. 00:38:37.066 [2024-07-10 23:42:45.973776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.973789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.973904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.973918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.974831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.974843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 
00:38:37.067 [2024-07-10 23:42:45.974971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.975011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.975207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.975228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.975425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.975448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.975634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.975674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.975979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.976018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.976187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.976228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.976456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.976495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.976712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.976752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.976953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.976966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.977129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.977142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 
00:38:37.067 [2024-07-10 23:42:45.977311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.977324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.977492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.977505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.977612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.977625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.977738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.977753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.977948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.977961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.978061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.978184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.978332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.978441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.978679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 
00:38:37.067 [2024-07-10 23:42:45.978803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.978922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.978935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.979104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.979117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.979295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.979308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.979403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.979416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.979691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.979705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.979803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.979816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.979987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.980000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.980100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.980113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.980226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.980239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 
00:38:37.067 [2024-07-10 23:42:45.980338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.067 [2024-07-10 23:42:45.980351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.067 qpair failed and we were unable to recover it. 00:38:37.067 [2024-07-10 23:42:45.980520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.980533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.980692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.980704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.980864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.980877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.981078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.981090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.981286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.981299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.981395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.981408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.981574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.981587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.981696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.981708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.981885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.981898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 
00:38:37.068 [2024-07-10 23:42:45.982070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.982151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.982332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.982375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.982529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.982569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.982811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.982830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.983005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.983023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.983259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.983277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.983541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.983559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.983689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.983706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.983828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.983846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.983951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.983965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 
00:38:37.068 [2024-07-10 23:42:45.984061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.984169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.984290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.984417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.984527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.984674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.984898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.984911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.985064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.985235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.985370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 
00:38:37.068 [2024-07-10 23:42:45.985481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.985596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.985815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.985933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.985946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.986150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.986256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.986443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.986551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.986719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.986818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 
00:38:37.068 [2024-07-10 23:42:45.986936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.986949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.068 qpair failed and we were unable to recover it. 00:38:37.068 [2024-07-10 23:42:45.987116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.068 [2024-07-10 23:42:45.987169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.987332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.987371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.987595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.987634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.987776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.987789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.987950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.987963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.988129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.988141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.988256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.988270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.988420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.988433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.988615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.988627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 
00:38:37.069 [2024-07-10 23:42:45.988745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.988758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.988876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.988890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.988993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.989149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.989286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.989478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.989596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.989781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.989959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.989972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.990069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 
00:38:37.069 [2024-07-10 23:42:45.990219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.990412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.990594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.990725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.990862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.990979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.990992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.991151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.991168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.991281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.991293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.991391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.991404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.991564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.991576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 
00:38:37.069 [2024-07-10 23:42:45.991671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.991685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.991854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.991867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.992044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.992056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.992170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.992183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.992412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.992425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.992550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.992563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.992671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.992685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.069 [2024-07-10 23:42:45.992797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.069 [2024-07-10 23:42:45.992811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.069 qpair failed and we were unable to recover it. 00:38:37.070 [2024-07-10 23:42:45.992918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.070 [2024-07-10 23:42:45.992932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.070 qpair failed and we were unable to recover it. 00:38:37.070 [2024-07-10 23:42:45.993027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.070 [2024-07-10 23:42:45.993040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.070 qpair failed and we were unable to recover it. 
00:38:37.070 [2024-07-10 23:42:45.993129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.070 [2024-07-10 23:42:45.993141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.070 qpair failed and we were unable to recover it.
[... the same three-line failure group (posix_sock_create connect() errno = 111, followed by the nvme_tcp_qpair_connect_sock error, followed by "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 23:42:45.993 through 23:42:46.024, always against tqpair=0x61500033fe80, addr=10.0.0.2, port=4420 ...]
00:38:37.075 [2024-07-10 23:42:46.024918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.075 [2024-07-10 23:42:46.024930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.075 qpair failed and we were unable to recover it.
00:38:37.075 [2024-07-10 23:42:46.025029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.025911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.025923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.026031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.026214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 
00:38:37.075 [2024-07-10 23:42:46.026331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.026442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.026556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.026676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.026859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.026872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.027035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.027048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.027217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.027230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.027392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.027404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.027580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.027620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.027778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.027818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 
00:38:37.075 [2024-07-10 23:42:46.027961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.027999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.028146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.028208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.028358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.028397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.028551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.028590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.028812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.028826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.028999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.029016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.029103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.029115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.029318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.029332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.075 [2024-07-10 23:42:46.029501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.075 [2024-07-10 23:42:46.029537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.075 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.029692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.029731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 
00:38:37.076 [2024-07-10 23:42:46.029871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.029911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.030050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.030089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.030258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.030298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.030456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.030496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.030754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.030767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.030872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.030885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.031040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.031165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.031274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.031403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 
00:38:37.076 [2024-07-10 23:42:46.031578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.031757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.031873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.031912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.032062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.032101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.032299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.032341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.032542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.032555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.032656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.032669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.032831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.032843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.032945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.032959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 
00:38:37.076 [2024-07-10 23:42:46.033211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.033901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.033914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.034012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.034189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.034310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 
00:38:37.076 [2024-07-10 23:42:46.034483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.034663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.034766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.034920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.034933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.035111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.035124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.035225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.035238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.035348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.035361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.035488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.035503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.035672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.035685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 00:38:37.076 [2024-07-10 23:42:46.035784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.076 [2024-07-10 23:42:46.035797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.076 qpair failed and we were unable to recover it. 
00:38:37.077 [2024-07-10 23:42:46.035900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.035912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.036011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.036024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.036218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.036231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.036345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.036358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.036518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.036531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.036719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.036759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.036961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.037001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.037136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.037188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.037360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.037401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.037573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.037586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 
00:38:37.077 [2024-07-10 23:42:46.037759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.037774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.037874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.037887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.037999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.038867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.038884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 
00:38:37.077 [2024-07-10 23:42:46.039040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.039884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.039996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.040182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.040298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 
00:38:37.077 [2024-07-10 23:42:46.040417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.040654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.040763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.040877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.040889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.077 [2024-07-10 23:42:46.040994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.077 [2024-07-10 23:42:46.041007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.077 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.041172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.041302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.041396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.041591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.041703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 
00:38:37.078 [2024-07-10 23:42:46.041876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.041980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.041993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.042973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.042986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 
00:38:37.078 [2024-07-10 23:42:46.043076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.043188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.043327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.043514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.043624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.043798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.043906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.043922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 
00:38:37.078 [2024-07-10 23:42:46.044373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.044952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.044965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 
00:38:37.078 [2024-07-10 23:42:46.045774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.045979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.045991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.046092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.046104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.046200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.046212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.078 [2024-07-10 23:42:46.046320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.078 [2024-07-10 23:42:46.046332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.078 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.046436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.046449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.046653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.046691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.046845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.046884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.047026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 
00:38:37.079 [2024-07-10 23:42:46.047192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.047324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.047434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.047618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.047746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.047897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.047911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 
00:38:37.079 [2024-07-10 23:42:46.048595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.048980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.048992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.049083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.049191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.049304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.049482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.049598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.049767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 
00:38:37.079 [2024-07-10 23:42:46.049970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.049983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.050941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.050954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.053221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.053234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.053408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.053420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 
00:38:37.079 [2024-07-10 23:42:46.053529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.053541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.053700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.053719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.053889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.053901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.054023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.054268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.054390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.054498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.054620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.079 [2024-07-10 23:42:46.054844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.079 qpair failed and we were unable to recover it. 00:38:37.079 [2024-07-10 23:42:46.054964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.054987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 
00:38:37.080 [2024-07-10 23:42:46.055191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.055206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.055317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.055330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.055431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.055443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.055600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.055613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.055719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.055732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.055958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.055971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.056076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.056191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.056298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.056382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 
00:38:37.080 [2024-07-10 23:42:46.056578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.056705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.056891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.056904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 
00:38:37.080 [2024-07-10 23:42:46.057878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.057984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.057997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.058107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.058120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.058346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.058359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.058454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.058466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.058559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.058572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.058677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.058689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.058851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.058864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.059131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.059144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.059331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.059344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 
00:38:37.080 [2024-07-10 23:42:46.059532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.059545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.059702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.059714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.059827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.059840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.060000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.060018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.060272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.060285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.060463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.060476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.060585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.060597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.060726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.060747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.060879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.060904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.061024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.061045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 
00:38:37.080 [2024-07-10 23:42:46.061146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.061165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.061326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.061339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.061450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.061463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.080 [2024-07-10 23:42:46.061644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.080 [2024-07-10 23:42:46.061657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.080 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.061822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.061835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.061929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.061941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.062099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.062112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.062281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.062294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.062478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.062492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.062666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.062679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 
00:38:37.081 [2024-07-10 23:42:46.062783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.062914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.062927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.063850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.063863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.064092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 
00:38:37.081 [2024-07-10 23:42:46.064200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.064325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.064507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.064619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.064744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.064859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.064873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.065031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.065149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.065269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.065453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 
00:38:37.081 [2024-07-10 23:42:46.065572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.065675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.065865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.065877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.066031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.066077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.066235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.066276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.066514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.066553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.066644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.066656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.066781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.066802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.066920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.066939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.067107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.067125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 
00:38:37.081 [2024-07-10 23:42:46.067249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.067268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.067532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.081 [2024-07-10 23:42:46.067550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.081 qpair failed and we were unable to recover it. 00:38:37.081 [2024-07-10 23:42:46.067724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.067743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.067956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.067975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.068214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.068233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.068381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.068400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.068527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.068573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.068811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.068829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.069008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.069026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.069218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.069236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 
00:38:37.082 [2024-07-10 23:42:46.069346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.069367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.069475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.069493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.069606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.069624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.069838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.069857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.070026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.070166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.070298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.070420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.070658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.070765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 
00:38:37.082 [2024-07-10 23:42:46.070951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.070964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.071901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.071914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.072048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.072060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.072243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.072256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 
00:38:37.082 [2024-07-10 23:42:46.072350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.072362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.082 [2024-07-10 23:42:46.072574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.082 [2024-07-10 23:42:46.072612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.082 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.072914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.072953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.073174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.073214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.073390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.073429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.073659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.073699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.073969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.073982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.074104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.074129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.074402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.074423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.074557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.074575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 
00:38:37.083 [2024-07-10 23:42:46.074814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.074832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.075000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.075019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.075232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.075251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.075412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.075429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.075595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.075613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.075863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.075904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.076068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.076108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.076396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.076437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.076605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.076622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 00:38:37.083 [2024-07-10 23:42:46.076791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.083 [2024-07-10 23:42:46.076809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.083 qpair failed and we were unable to recover it. 
00:38:37.083 [2024-07-10 23:42:46.077019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.077040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.077163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.077182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.077359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.077377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.077503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.077522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.077709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.077728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.077923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.077940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.078112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.078130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.078317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.078330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.078436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.078448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.078545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.078558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.078796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.078808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.078972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.078984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.079184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.079197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.079438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.079479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.079647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.079686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.079897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.079936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.080082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.080095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.080333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.080346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.080444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.080457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.080669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.080682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.080893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.080906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.081139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.083 [2024-07-10 23:42:46.081152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.083 qpair failed and we were unable to recover it.
00:38:37.083 [2024-07-10 23:42:46.081324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.081338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.081571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.081584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.081701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.081714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.081913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.081926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.082027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.082040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.082234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.082250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.082430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.082443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.082617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.082631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.082819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.082832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.082939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.082952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.083026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.083038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.083204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.083218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.083454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.083494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.083826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.083875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.084898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.084911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.085099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.085295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.085414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.085527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.085728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.085891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.085995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.086008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.084 [2024-07-10 23:42:46.086188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.084 [2024-07-10 23:42:46.086201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.084 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.086388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.086402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.086580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.086594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.086700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.086713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.086892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.086905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.087123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.087170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.087386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.087426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.087637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.087676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.087893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.087933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.088181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.088221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.088378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.088417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.088651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.088690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.088861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.088900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.089179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.089220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.089468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.089507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.089733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.089799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.089942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.089954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.365 [2024-07-10 23:42:46.090167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.365 [2024-07-10 23:42:46.090183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.365 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.090376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.090389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.090618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.090630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.090712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.090733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.090844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.090857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.091028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.091146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.091359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.091491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.091666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.091758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.091961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.092217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.092438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.092635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.092723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.092847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.092966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.092979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.093150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.093167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.093343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.093356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.093526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.093539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.093792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.093805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.093910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.093923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.094033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.094046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.094178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.094192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.094318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.094331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.094489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.094502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.094681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.094693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.094888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.094928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.095154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.095202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.095356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.095395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.095604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.095643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.095924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.095963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.096200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.096240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.096463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.096502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.096625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.096665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.096875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.096888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.097062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.097075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.097243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.097257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.097363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.097376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.097600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.366 [2024-07-10 23:42:46.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.366 qpair failed and we were unable to recover it.
00:38:37.366 [2024-07-10 23:42:46.097812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.097825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.097930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.097943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.098157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.098173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.098359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.098372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.098477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.098491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.098746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.098759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.098937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.098951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.099955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.099968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.100081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.100094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.100191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.100204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.100390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.100403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.100592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.100606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.100775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.100788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.101045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.101058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.101235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.101248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.101352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.101365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.101582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.101595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.101831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.101843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.102112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.102125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.102404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.102417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.102540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.102553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.102744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.102758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.102978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.102996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.103169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.103183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.103273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.103286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.103456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.103468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.103638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.103651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.103766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.103779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.103952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.103965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.104148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.104166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.104372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.104385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.104489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.104503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.104593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.104605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.104794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.104811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.104964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.367 [2024-07-10 23:42:46.104976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.367 qpair failed and we were unable to recover it.
00:38:37.367 [2024-07-10 23:42:46.105080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.105093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.105290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.105303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.105393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.105405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.105589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.105601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.105724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.105736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.105849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.105862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.106112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.106125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.106293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.106306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.106558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.106594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.106759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.106798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.107011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.107050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.107280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.107320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.107606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.107645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.107923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.107962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.108187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.108227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.108476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.108516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.108735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.108774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.109056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.109068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.109320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.109333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.109508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.109520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.109699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.368 [2024-07-10 23:42:46.109712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.368 qpair failed and we were unable to recover it.
00:38:37.368 [2024-07-10 23:42:46.109882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.109895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.110143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.110156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.110406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.110420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.110587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.110600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.110790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.110830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.111044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.111083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.111386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.111426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.111592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.111632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.111938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.111977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.112291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.112332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 
00:38:37.368 [2024-07-10 23:42:46.112490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.112530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.112740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.112779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.113008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.113048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.113328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.113369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.113526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.113566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.113799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.113838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.114122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.114170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.114425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.114470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.114680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.114719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 00:38:37.368 [2024-07-10 23:42:46.114966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.368 [2024-07-10 23:42:46.115005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.368 qpair failed and we were unable to recover it. 
00:38:37.369 [2024-07-10 23:42:46.115144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.115157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.115281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.115294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.115397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.115410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.115686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.115699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.115800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.115813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 
00:38:37.369 [2024-07-10 23:42:46.116611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.116929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.116942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.117902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.117915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 
00:38:37.369 [2024-07-10 23:42:46.118014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.118027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.118133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.118145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.118367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.118380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.118542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.118555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.118738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.118777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.118940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.118979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.119189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.119229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.119374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.119413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.119560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.119599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.119845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.119884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 
00:38:37.369 [2024-07-10 23:42:46.120078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.120117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.120434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.120474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.120619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.369 [2024-07-10 23:42:46.120657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.369 qpair failed and we were unable to recover it. 00:38:37.369 [2024-07-10 23:42:46.120888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.120927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.121144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.121221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.121390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.121429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.121634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.121673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.121970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.122009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.122236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.122250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.122426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.122439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 
00:38:37.370 [2024-07-10 23:42:46.122564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.122578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.122764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.122800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.123083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.123123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.123270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.123311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.123482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.123522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.123726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.123739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.123916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.123928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.124185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.124225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.124379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.124418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.124627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.124666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 
00:38:37.370 [2024-07-10 23:42:46.124957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.124970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.125236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.125250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.125449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.125462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.125582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.125594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.125713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.125730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.125887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.125900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.126006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.126019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.126210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.126223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.126371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.126396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.126557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.126571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 
00:38:37.370 [2024-07-10 23:42:46.126690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.126703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.126865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.126878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.126989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.127121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.127255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.127352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.127610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.127789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.127980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.127992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.128109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.128122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 
00:38:37.370 [2024-07-10 23:42:46.128243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.128256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.128371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.370 [2024-07-10 23:42:46.128384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.370 qpair failed and we were unable to recover it. 00:38:37.370 [2024-07-10 23:42:46.128544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.128557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.128720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.128733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.128893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.128915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.129144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.129157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.129290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.129302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.129465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.129480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.129564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.129576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.129740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.129753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 
00:38:37.371 [2024-07-10 23:42:46.129928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.129941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.130038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.130051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.130222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.130235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.130406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.130419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.130679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.130719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.130950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.130989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.131167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.131207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.131356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.131396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.131643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.131682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.131974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.132013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 
00:38:37.371 [2024-07-10 23:42:46.132257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.132270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.132511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.132524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.132692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.132705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.132823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.132836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.133067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.133080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.133237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.133250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.133473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.133486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.133615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.133628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.133731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.133744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.133987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.134000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 
00:38:37.371 [2024-07-10 23:42:46.134176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.134189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.134351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.134364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.134529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.134542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.134715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.134728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.134903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.134943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.135182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.135223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.135434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.135473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.135740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.135753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.135922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.135935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.136071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.136084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 
00:38:37.371 [2024-07-10 23:42:46.136211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.136224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.136417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.371 [2024-07-10 23:42:46.136430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.371 qpair failed and we were unable to recover it. 00:38:37.371 [2024-07-10 23:42:46.136596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.136609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.136779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.136792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.136918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.136931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.137136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.137207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.137367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.137407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.137695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.137739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.137882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.137895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.138069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.138082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 
00:38:37.372 [2024-07-10 23:42:46.138239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.138253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.138375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.138388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.138546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.138559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.138720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.138733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.138924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.138963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.139111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.139151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.139414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.139454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.139670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.139710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.139853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.139892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 00:38:37.372 [2024-07-10 23:42:46.140061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.372 [2024-07-10 23:42:46.140074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.372 qpair failed and we were unable to recover it. 
00:38:37.372 [2024-07-10 23:42:46.140323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.372 [2024-07-10 23:42:46.140336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.372 qpair failed and we were unable to recover it.
00:38:37.372 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 23:42:46.140 and 23:42:46.185, all targeting addr=10.0.0.2, port=4420, against tqpairs 0x61500033fe80 (the large majority of occurrences), 0x61500032ff80, 0x615000350000, and 0x61500032d780 ...]
00:38:37.377 [2024-07-10 23:42:46.185240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.185280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.185560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.185600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.185761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.185801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.186100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.186139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.186352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.186367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.186562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.186574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.186811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.186824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.187009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.187022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-07-10 23:42:46.187141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.377 [2024-07-10 23:42:46.187154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.187407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.187440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 
00:38:37.378 [2024-07-10 23:42:46.187605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.187645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.187949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.187988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.188120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.188132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.188398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.188411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.188523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.188536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.188701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.188714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.188835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.188848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.188965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.188978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.189091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.189104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.189267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.189281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 
00:38:37.378 [2024-07-10 23:42:46.189384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.189399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.189491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.189506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.189754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.189767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.189940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.189953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.190067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.190080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.190246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.190260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.190373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.190387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.190546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.190559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.190736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.190749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.190904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.190917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 
00:38:37.378 [2024-07-10 23:42:46.191086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.191099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.191364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.191378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.191496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.191509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.191619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.191632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.191810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.191823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.191933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.191945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.192157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.192205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.192442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.192481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.192640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.378 [2024-07-10 23:42:46.192679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.378 qpair failed and we were unable to recover it. 00:38:37.378 [2024-07-10 23:42:46.192904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.192917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 
00:38:37.379 [2024-07-10 23:42:46.193046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.193061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.193228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.193246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.193373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.193386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.193559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.193572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.193822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.193884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.194050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.194090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.194319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.194359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.194647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.194686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.194899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.194940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.195166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.195179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 
00:38:37.379 [2024-07-10 23:42:46.195290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.195303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.195427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.195440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.195609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.195621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.195723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.195737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.195910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.195924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.196031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.196043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.196207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.196221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.196384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.196397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.196580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.196593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.196841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.196854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 
00:38:37.379 [2024-07-10 23:42:46.197038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.197051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.197176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.197190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.197385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.197398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.197556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.197569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.197763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.197775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.197946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.197959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.198135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.198183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.198357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.198397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.198679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.198718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.198894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.198907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 
00:38:37.379 [2024-07-10 23:42:46.199139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.199186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.199474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.199513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.199672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.199712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.379 qpair failed and we were unable to recover it. 00:38:37.379 [2024-07-10 23:42:46.199985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.379 [2024-07-10 23:42:46.200024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.200252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.200292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.200594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.200634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.200871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.200910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.201145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.201196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.201424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.201463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.201764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.201803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 
00:38:37.380 [2024-07-10 23:42:46.201957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.201969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.202134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.202178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.202511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.202551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.202725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.202774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.202955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.202969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.203218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.203231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.203404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.203416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.203533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.203546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.203729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.203742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.204006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.204019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 
00:38:37.380 [2024-07-10 23:42:46.204244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.204256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.204511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.204524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.204630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.204642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.204819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.204831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.205028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.205041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.205207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.205235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.205440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.205479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.205706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.205746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.205966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.205978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.206145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.206157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 
00:38:37.380 [2024-07-10 23:42:46.206338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.206378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.206703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.206742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.207033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.207045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.207269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.207282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.207420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.207435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.207684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.207701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.207893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.207906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.208064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.208076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.208183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.208196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.208354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.208367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 
00:38:37.380 [2024-07-10 23:42:46.208574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.208614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.208871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.208911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.209135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.209182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.209396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.380 [2024-07-10 23:42:46.209435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.380 qpair failed and we were unable to recover it. 00:38:37.380 [2024-07-10 23:42:46.209599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.209638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.209849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.209887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.210116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.210155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.210395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.210407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.210656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.210668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.210877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.210890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 
00:38:37.381 [2024-07-10 23:42:46.211115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.211127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.211247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.211260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.211373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.211386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.211490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.211502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.211770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.211785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.211951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.211964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.212135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.212147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.212257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.212271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.212366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.212380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.212549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.212562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 
00:38:37.381 [2024-07-10 23:42:46.212664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.212677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.212886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.212899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.213026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.213261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.213399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.213504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.213621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.213803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.213989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.214122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 
00:38:37.381 [2024-07-10 23:42:46.214261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.214501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.214672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.214815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.214940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.214953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.215051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.215063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.215315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.215329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.215521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.215534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.215645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.215660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 00:38:37.381 [2024-07-10 23:42:46.215896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.381 [2024-07-10 23:42:46.215908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.381 qpair failed and we were unable to recover it. 
00:38:37.381 [2024-07-10 23:42:46.216088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.381 [2024-07-10 23:42:46.216128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.381 qpair failed and we were unable to recover it.
00:38:37.381 [2024-07-10 23:42:46.216310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.381 [2024-07-10 23:42:46.216350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.381 qpair failed and we were unable to recover it.
00:38:37.381 [2024-07-10 23:42:46.216587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.381 [2024-07-10 23:42:46.216626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.381 qpair failed and we were unable to recover it.
00:38:37.381 [2024-07-10 23:42:46.216909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.381 [2024-07-10 23:42:46.216949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.381 qpair failed and we were unable to recover it.
00:38:37.381 [2024-07-10 23:42:46.217194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.381 [2024-07-10 23:42:46.217208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.381 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.217402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.217415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.217677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.217690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.217851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.217864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.218022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.218035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.218230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.218244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.218420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.218433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.218655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.218668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.218835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.218848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.219019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.219032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.219225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.219241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.219333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.219346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.219525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.219538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.219762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.219775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.219918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.219932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.220099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.220116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.220338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.220352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.220581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.220621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.220776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.220814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.221086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.221100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.221213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.221226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.221349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.221362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.221605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.221618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.221825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.221864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.221994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.222033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.222311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.222351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.222592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.222630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.222913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.222952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.223124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.223172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.223312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.223325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.223434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.223446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.223696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.223709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.223882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.223895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.224065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.224078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.224238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.224252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.224414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.224427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.224599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.224612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.224715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.224728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.224883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.224896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.225088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.225128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.225424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.225506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.382 qpair failed and we were unable to recover it.
00:38:37.382 [2024-07-10 23:42:46.225818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.382 [2024-07-10 23:42:46.225900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.226264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.226304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.226571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.226614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.226777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.226818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.227100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.227138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.227313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.227353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.227583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.227622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.227847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.227885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.228116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.228152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.228317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.228332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.228443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.228456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.228616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.228628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.228799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.228812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.228912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.228925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.229080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.229092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.229265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.229279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.229532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.229571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.229813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.229852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.230087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.230126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.230378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.230434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.230620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.230672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.230995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.231039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.231288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.231307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.231479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.231498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.231589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.231608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.231863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.231883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.232117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.232135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.232274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.232293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.232472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.232513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.232655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.232695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.232861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.232901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.233123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.233178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.233408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.233449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.233684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.233725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.383 [2024-07-10 23:42:46.234035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.383 [2024-07-10 23:42:46.234076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.383 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.234356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.234397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.234599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.234639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.234951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.234992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.235203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.235244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.235420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.235459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.235683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.235724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.236048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.236088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.236278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.236296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.236497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.236515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.236715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.236755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.237043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.237083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.237330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.237371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.237631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.237670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.237922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.237963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.238249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.238296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.238535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.238574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.238854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.238894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.239102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.239120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.239247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.239266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.239473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.239491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.239726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.239744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.239932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.239950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.240251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.240291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.240602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.240643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.240807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.240846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.241021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.241061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.241274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.241292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.241432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.241451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.241661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.241700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.242031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.242071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.242239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.242259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.242409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.242430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.242598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.242619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.242747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.242765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.242947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.242982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.243265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.243305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.243545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.243584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.243743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.243782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.244018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.244058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.244353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.384 [2024-07-10 23:42:46.244372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.384 qpair failed and we were unable to recover it.
00:38:37.384 [2024-07-10 23:42:46.244590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.244609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.244729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.244747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.244931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.244950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.245190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.245215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.245398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.245417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.245603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.245621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.245817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.245836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.245962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.245981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.246164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.246213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.246375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.246414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.246702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.246742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.246900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.246939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.247116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.247155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.247394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.247412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.247620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.247641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.247757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.247776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.247980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.247998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.248103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.248123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.248261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.248280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.248388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.248406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.248641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.248680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.248818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.248858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.249075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.249115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.249346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.249386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.249539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.249578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.249734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.249774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.249930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.249969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.250192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.250233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.250526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.250566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.250849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.250889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.251061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.251102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.251242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.251261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.251436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.251478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.251723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.251763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.252003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.252043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.252258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.252277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.252392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.252410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.252662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.252680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.252865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.252884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.253124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.253143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.253337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.385 [2024-07-10 23:42:46.253378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.385 qpair failed and we were unable to recover it.
00:38:37.385 [2024-07-10 23:42:46.253692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.253777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.254182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.254206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.254395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.254415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.254654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.254673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.254811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.254830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.254941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.254959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.255206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.255226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.255483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.255502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.255684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.386 [2024-07-10 23:42:46.255702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.386 qpair failed and we were unable to recover it.
00:38:37.386 [2024-07-10 23:42:46.255804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.255823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.256009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.256027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.256288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.256311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.256450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.256468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.256594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.256616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.256760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.256779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.256978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.257019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.257292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.257361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.257586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.257627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.257792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.257831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 
00:38:37.386 [2024-07-10 23:42:46.258040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.258058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.258262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.258282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.258402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.258420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.258667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.258686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.258934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.258952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.259209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.259229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.259467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.259485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.259623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.259642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.259851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.259870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.260136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.260198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 
00:38:37.386 [2024-07-10 23:42:46.260444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.260485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.260707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.260748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.261059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.261098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.261288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.261330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.261611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.261651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.261874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.261915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.262136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.262155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.262288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.262306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.262431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.262448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.262568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.262586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 
00:38:37.386 [2024-07-10 23:42:46.262697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.262715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.386 [2024-07-10 23:42:46.263031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.386 [2024-07-10 23:42:46.263073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.386 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.263278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.263300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.263490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.263510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.263703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.263744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.263981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.264023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.264324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.264369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.264474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.264492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.264662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.264681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.264863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.264881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 
00:38:37.387 [2024-07-10 23:42:46.265051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.265069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.265252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.265271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.265488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.265530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.265755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.265812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.265978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.266027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.266180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.266199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.266396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.266437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.266715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.266755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.266912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.266953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.267093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.267112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 
00:38:37.387 [2024-07-10 23:42:46.267325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.267368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.267525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.267565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.267701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.267740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.267990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.268031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.268301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.268342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.268556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.268597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.268904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.268945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.269228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.269269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.269512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.269554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.269791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.269832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 
00:38:37.387 [2024-07-10 23:42:46.270060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.270101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.270327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.270370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.270633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.270673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.270984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.271024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.271270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.271312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.271540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.271582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.271817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.387 [2024-07-10 23:42:46.271858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.387 qpair failed and we were unable to recover it. 00:38:37.387 [2024-07-10 23:42:46.272021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.272061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.272289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.272308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.272444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.272463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 
00:38:37.388 [2024-07-10 23:42:46.272573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.272592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.272886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.272918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.273039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.273055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.273150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.273175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.273337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.273351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.273546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.273560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.273739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.273753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.273922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.273935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.274041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.274172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 
00:38:37.388 [2024-07-10 23:42:46.274328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.274442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.274642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.274816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.274939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.274957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.275116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.275129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.275396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.275410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.275707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.275721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.275814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.275827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.275940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.275954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 
00:38:37.388 [2024-07-10 23:42:46.276132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.276144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.276286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.276326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.276561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.276605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.276839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.276892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.277056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.277070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.277189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.277203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.277430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.277443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.277546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.277559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.277758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.277783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.277906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.277919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 
00:38:37.388 [2024-07-10 23:42:46.278178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.278219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.278441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.278481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.278714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.278754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.278906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.278946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.279197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.279238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.279517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.279530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.279635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.279647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.388 [2024-07-10 23:42:46.279876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.388 [2024-07-10 23:42:46.279889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.388 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.280072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.280085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.280264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.280278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 
00:38:37.389 [2024-07-10 23:42:46.280446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.280459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.280571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.280584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.280781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.280822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.281073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.281114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.281371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.281385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.281573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.281609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.281834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.281874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.282038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.282079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.282243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.282257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.282374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.282388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 
00:38:37.389 [2024-07-10 23:42:46.282547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.282560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.282720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.282772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.282949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.282989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.283222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.283264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.283387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.283403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.283560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.283574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.283685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.283698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.283856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.283869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.284037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.284051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.284229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.284243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 
00:38:37.389 [2024-07-10 23:42:46.284423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.284437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.284594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.284607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.284691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.284703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.284817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.284830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.285001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.285015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.285243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.285288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.285452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.285492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.285724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.285763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.286070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.286111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.286366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.286409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 
00:38:37.389 [2024-07-10 23:42:46.286606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.286620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.286780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.286794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.286952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.286965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.287190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.287232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.287458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.287499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.287800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.287839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.288051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.288091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.288274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.288315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.389 [2024-07-10 23:42:46.288561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.389 [2024-07-10 23:42:46.288574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.389 qpair failed and we were unable to recover it. 00:38:37.390 [2024-07-10 23:42:46.288744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.390 [2024-07-10 23:42:46.288757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.390 qpair failed and we were unable to recover it. 
00:38:37.390 [2024-07-10 23:42:46.288847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.390 [2024-07-10 23:42:46.288860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.390 qpair failed and we were unable to recover it.
00:38:37.390 [2024-07-10 23:42:46.289148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.390 [2024-07-10 23:42:46.289198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.390 qpair failed and we were unable to recover it.
00:38:37.390 [2024-07-10 23:42:46.289420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.390 [2024-07-10 23:42:46.289443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.390 qpair failed and we were unable to recover it.
[... the same three-line error triplet repeats through 23:42:46.330932, first against tqpair=0x615000350000 and then against tqpair=0x61500033fe80; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, and no qpair recovers ...]
00:38:37.395 [2024-07-10 23:42:46.330918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.395 [2024-07-10 23:42:46.330932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.395 qpair failed and we were unable to recover it.
00:38:37.395 [2024-07-10 23:42:46.331025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.331038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.331237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.331251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.331347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.331359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.331550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.331589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.331759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.331799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.332088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.332128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.332353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.332435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.332692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.332732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.332956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.332998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.333191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.333206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 
00:38:37.395 [2024-07-10 23:42:46.333301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.333313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.333425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.333438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.333553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.333566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.333674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.333687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.333802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.333815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.333996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.334008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.334098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.334111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.334245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.334258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.334446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.334487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.334645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.334711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 
00:38:37.395 [2024-07-10 23:42:46.335022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.335061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.395 [2024-07-10 23:42:46.335278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.395 [2024-07-10 23:42:46.335291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.395 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.335498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.335537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.335762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.335802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.336135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.336184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.336409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.336448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.336681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.336719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.336897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.336937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.337237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.337250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.337426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.337439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 
00:38:37.396 [2024-07-10 23:42:46.337632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.337645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.337841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.337856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.337952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.337967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.338193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.338207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.338387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.338400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.338523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.338536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.338646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.338659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.338777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.338790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.338884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.338896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.339001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 
00:38:37.396 [2024-07-10 23:42:46.339118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.339228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.339418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.339523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.339655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.339844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.339857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.340092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.340210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.340386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.340569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 
00:38:37.396 [2024-07-10 23:42:46.340687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.340798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.340942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.340954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.341066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.341079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.341238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.341252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.341347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.341360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.341524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.341537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.341645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.341658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.341879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.341918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.342130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.342180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 
00:38:37.396 [2024-07-10 23:42:46.342403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.396 [2024-07-10 23:42:46.342443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.396 qpair failed and we were unable to recover it. 00:38:37.396 [2024-07-10 23:42:46.342679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.342718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.342939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.342980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.343210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.343252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.343466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.343507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.343756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.343769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.343939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.343952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.344056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.344068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.344247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.344261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.344370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.344383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 
00:38:37.397 [2024-07-10 23:42:46.344541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.344554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.344745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.344790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.344953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.344994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.345235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.345277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.345516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.345529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.345695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.345708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.345903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.345916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.346030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.346043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.346212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.346226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.346398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.346449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 
00:38:37.397 [2024-07-10 23:42:46.346677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.346717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.346873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.346913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.347086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.347126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.347478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.347538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.347764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.347806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.348035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.348076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.348248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.348267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.348482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.348522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.348754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.348793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.349035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.349075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 
00:38:37.397 [2024-07-10 23:42:46.349309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.349328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.349611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.349630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.349786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.349806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.349965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.349978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.350082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.350095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.350276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.350316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.350646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.350684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.350995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.351035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.351257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.351298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.351501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.351514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 
00:38:37.397 [2024-07-10 23:42:46.351686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.351699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.351886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.397 [2024-07-10 23:42:46.351925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.397 qpair failed and we were unable to recover it. 00:38:37.397 [2024-07-10 23:42:46.352154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.352207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.352439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.352478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.352779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.352818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.353069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.353109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.353406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.353446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.353673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.353713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.354022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.354061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.354285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.354326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 
00:38:37.398 [2024-07-10 23:42:46.354556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.354605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.354862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.354876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.355054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.355069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.355261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.355301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.355582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.355621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.355936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.355975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.356275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.356316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.356634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.356673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.356886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.356925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.357070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.357109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 
00:38:37.398 [2024-07-10 23:42:46.357349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.357389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.357625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.357638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.357843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.357856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.358944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.358956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 
00:38:37.398 [2024-07-10 23:42:46.359072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.359085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.359273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.359287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.359518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.359532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.359634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.359647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.359899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.359912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.360083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.360097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.360304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.360343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.360572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.360611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.360866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.360950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.361240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.361270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 
00:38:37.398 [2024-07-10 23:42:46.361398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.361417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.361592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.361610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.361735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.398 [2024-07-10 23:42:46.361776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.398 qpair failed and we were unable to recover it. 00:38:37.398 [2024-07-10 23:42:46.361947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.361986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 00:38:37.399 [2024-07-10 23:42:46.362216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.362275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 00:38:37.399 [2024-07-10 23:42:46.362497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.362515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 00:38:37.399 [2024-07-10 23:42:46.362756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.362774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 00:38:37.399 [2024-07-10 23:42:46.362917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.362935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 00:38:37.399 [2024-07-10 23:42:46.363016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.363033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 00:38:37.399 [2024-07-10 23:42:46.363211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.399 [2024-07-10 23:42:46.363230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.399 qpair failed and we were unable to recover it. 
00:38:37.399 [2024-07-10 23:42:46.363425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.363466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.363691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.363738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.363953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.363993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.364221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.364262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.364486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.364527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.364811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.364829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.365018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.365036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.365315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.365333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.365474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.365493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.365624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.365642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.365743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.365761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.365893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.365913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.366118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.366302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.366484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.366599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.366715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.366828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.366993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.367006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.367236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.367277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.367567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.367606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.367818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.367858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.368084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.368123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.368296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.368310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.368512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.368525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.368640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.368653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.368880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.368893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.369072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.369085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.369196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.369209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.369418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.369431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.369610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.369628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.369811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.369850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.370082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.399 [2024-07-10 23:42:46.370121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.399 qpair failed and we were unable to recover it.
00:38:37.399 [2024-07-10 23:42:46.370381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.370421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.370672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.370711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.370944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.370983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.371210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.371250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.371497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.371537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.371813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.371853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.372063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.372101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.372323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.372363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.372537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.372552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.372796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.372809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.372975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.373024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.373327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.373368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.373665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.373704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.373859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.373899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.374110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.374149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.374314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.374354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.374595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.374634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.374802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.374841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.375147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.375210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.375486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.375526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.375753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.375765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.376006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.376019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.376192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.376205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.376365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.376378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.376536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.376549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.376690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.376729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.376953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.376992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.377234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.377275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.377492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.377504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.377687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.377711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.377971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.377983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.378098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.378122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.378241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.378254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.378352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.378365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.378520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.378532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.400 qpair failed and we were unable to recover it.
00:38:37.400 [2024-07-10 23:42:46.378732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.400 [2024-07-10 23:42:46.378745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.378866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.378878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.378996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.379010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.379103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.379115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.379373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.379387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.379544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.379581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.379799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.379838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.380077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.380116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.380412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.380454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.380755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.380779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.380897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.380911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.381104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.381118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.381287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.381302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.381459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.381477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.381703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.381718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.381915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.381955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.382119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.382169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.382343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.382384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.382543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.382582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.382741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.382755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.382938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.382953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.383141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.383156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.383303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.383317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.383493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.383507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.383754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.383769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.384006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.384021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.384119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.384138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.384358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.384401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.384553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.384613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.384832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.384873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.385044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.385085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.385350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.385392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.385568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.385582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.385703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.385718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.385893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.385908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.386003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.386017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.386245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.386260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.386455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.386470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.386588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.386602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.386784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.386825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.387069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.387109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.401 qpair failed and we were unable to recover it.
00:38:37.401 [2024-07-10 23:42:46.387378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.401 [2024-07-10 23:42:46.387418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.387595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.387610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.387852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.387867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.388042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.388083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.388368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.388410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.388652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.388694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.388856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.388896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.389049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.389090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.389331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.389373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.389543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.389583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.389739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.389764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.389942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.389957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.390155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.390176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.390402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.390416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.390565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.390580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.390756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.390770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.390948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.390989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.391290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.391333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.391566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.391606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.391824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.391839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.392002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.392044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.392264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.392305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.392532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.392573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.392798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.392838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.393054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.393095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.393324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.393338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.393451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.393466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.393701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.393742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.393997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.394038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.394303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.394344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.394545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.394559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.394683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.394697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.394867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.394882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.395042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.395057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.395257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.395271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.395409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.395423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.395530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.395544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.395772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.395786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.395944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.395959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.396173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.396215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.396374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.396389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.402 qpair failed and we were unable to recover it.
00:38:37.402 [2024-07-10 23:42:46.396550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.402 [2024-07-10 23:42:46.396565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.396721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.396735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.396900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.396941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.397182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.397224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.397362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.397377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.397595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.397636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.397849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.397891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.398127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.398178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.398418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.398437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.398587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.398601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.398726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.398741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.398998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.399045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.399270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.399313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.399543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.399584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.399811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.399826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.399939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.399989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.400270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.400312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.400494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.400534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.400739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.400754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.400913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.403 [2024-07-10 23:42:46.400927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.403 qpair failed and we were unable to recover it.
00:38:37.403 [2024-07-10 23:42:46.401129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.401144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.401256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.401270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.401363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.401377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.401536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.401550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.401711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.401752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.401990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.402032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.402251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.402293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.402516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.402557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.402711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.402746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.402904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.402919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 
00:38:37.403 [2024-07-10 23:42:46.403147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.403166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.403270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.403284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.403464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.403478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.403617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.403657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.403794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.403835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.404055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.404095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.404333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.404375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.404545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.404559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.404731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.404745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.404904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.404918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 
00:38:37.403 [2024-07-10 23:42:46.405037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.403 [2024-07-10 23:42:46.405052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.403 qpair failed and we were unable to recover it. 00:38:37.403 [2024-07-10 23:42:46.405283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.405324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.405591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.405605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.405705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.405718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.405940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.405954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.406056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.406070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.406251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.406265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.406449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.406463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.406658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.406673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.406860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.406910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 
00:38:37.404 [2024-07-10 23:42:46.407086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.407126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.407379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.407463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.407645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.407687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.407891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.407915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.408108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.408128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.408338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.408359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.408524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.408543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.408665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.408681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.408931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.408945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.409151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.409203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 
00:38:37.404 [2024-07-10 23:42:46.409375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.409415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.409633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.409647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.409803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.409818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.409939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.409953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.410050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.410064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.410170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.410184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.404 [2024-07-10 23:42:46.410355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.404 [2024-07-10 23:42:46.410369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.404 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.410469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.410484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.410606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.410621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.410697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.410711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 
00:38:37.686 [2024-07-10 23:42:46.410821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.410836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.410937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.410953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.411128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.411142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.411246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.411260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.411487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.411501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.411600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.411614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.411797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.411811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.686 [2024-07-10 23:42:46.411988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.686 [2024-07-10 23:42:46.412004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.686 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.412129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.412151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.412321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.412336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 
00:38:37.687 [2024-07-10 23:42:46.412573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.412615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.412853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.412893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.413124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.413174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.413350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.413390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.413615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.413656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.413928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.413942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.414115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.414129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.414355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.414370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.414542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.414556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.414714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.414728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 
00:38:37.687 [2024-07-10 23:42:46.414888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.414903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.415935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.415950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 
00:38:37.687 [2024-07-10 23:42:46.416325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.416945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.416959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.417082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.417097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.417340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.417381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.417531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.417571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.417790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.417830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 
00:38:37.687 [2024-07-10 23:42:46.418063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.418104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.418264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.418315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.418501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.418516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.418755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.418770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.418928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.418943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.419137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.419151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.419280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.419296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.419418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.687 [2024-07-10 23:42:46.419432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.687 qpair failed and we were unable to recover it. 00:38:37.687 [2024-07-10 23:42:46.419551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.419567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.419742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.419756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 
00:38:37.688 [2024-07-10 23:42:46.419864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.419879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.420039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.420053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.420217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.420232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.420339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.420353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.420547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.420562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.420732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.420747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.420982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.421022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.421266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.421307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.421543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.421558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.421672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.421686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 
00:38:37.688 [2024-07-10 23:42:46.421809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.421824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.421986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.422000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.422179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.422194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.422370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.422384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.422626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.422668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.422885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.422925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.423168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.423211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.423385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.423426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.423765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.423779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.423864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.423877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 
00:38:37.688 [2024-07-10 23:42:46.424035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.424049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.424232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.424249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.424365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.424384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.424485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.424499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.424673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.424713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.424957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.424997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.425249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.425289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.425543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.425587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.425774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.425788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.425895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.425909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 
00:38:37.688 [2024-07-10 23:42:46.426166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.426181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.426303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.426318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.426497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.426537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.426818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.426858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.427136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.427197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.427412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.427453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.427667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.427681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.427801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.427816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.428076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.688 [2024-07-10 23:42:46.428092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.688 qpair failed and we were unable to recover it. 00:38:37.688 [2024-07-10 23:42:46.428200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.428215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 
00:38:37.689 [2024-07-10 23:42:46.428309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.428324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.428510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.428524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.428700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.428715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.428822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.428836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.428941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.428956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.429086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.429100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.429217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.429232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.429347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.429361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.429553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.429568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 00:38:37.689 [2024-07-10 23:42:46.429830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.689 [2024-07-10 23:42:46.429844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.689 qpair failed and we were unable to recover it. 
00:38:37.689 [2024-07-10 23:42:46.429961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.689 [2024-07-10 23:42:46.429975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.689 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 23:42:46.430174 through 23:42:46.465453, all against tqpair=0x61500033fe80 ...]
00:38:37.693 [2024-07-10 23:42:46.465760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.693 [2024-07-10 23:42:46.465802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.693 qpair failed and we were unable to recover it.
[... the triplet repeats against tqpair=0x61500032ff80 from 23:42:46.465925 through 23:42:46.470894 ...]
00:38:37.694 [2024-07-10 23:42:46.471065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.694 [2024-07-10 23:42:46.471082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.694 qpair failed and we were unable to recover it.
[... the triplet repeats against tqpair=0x61500033fe80 from 23:42:46.471213 to the end of this burst ...]
00:38:37.694 [2024-07-10 23:42:46.472864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.694 [2024-07-10 23:42:46.472878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.694 qpair failed and we were unable to recover it.
00:38:37.694 [2024-07-10 23:42:46.473056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.473070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.473235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.473250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.473443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.473484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.473780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.473820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.474065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.474106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.474421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.474506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.474820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.474869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.475049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.475071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.475254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.475270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.475396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.475410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 
00:38:37.694 [2024-07-10 23:42:46.475510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.475523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.475687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.475702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.475873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.475887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.476002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.476016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.476206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.476248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.476527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.476567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.476699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.476713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.694 qpair failed and we were unable to recover it. 00:38:37.694 [2024-07-10 23:42:46.476875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.694 [2024-07-10 23:42:46.476889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.477005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.477022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.477271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.477286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 
00:38:37.695 [2024-07-10 23:42:46.477456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.477470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.477653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.477667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.477844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.477885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.478110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.478151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.478373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.478414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.478573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.478614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.478914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.478950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.479060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.479075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.479234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.479249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.479426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.479440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 
00:38:37.695 [2024-07-10 23:42:46.479626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.479667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.479948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.479988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.480222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.480263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.480410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.480450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.480681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.480695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.480912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.480927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.481095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.481109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.481369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.481411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.481634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.481648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.481768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.481809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 
00:38:37.695 [2024-07-10 23:42:46.482065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.482105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.482269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.482310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.482560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.482600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.482769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.482816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.483001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.483017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.483251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.483311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.483539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.483591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.483778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.483823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.484075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.484115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.484345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.484386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 
00:38:37.695 [2024-07-10 23:42:46.484667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.484687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.484887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.484907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.485145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.485169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.695 qpair failed and we were unable to recover it. 00:38:37.695 [2024-07-10 23:42:46.485411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.695 [2024-07-10 23:42:46.485431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.485633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.485667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.485903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.485917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.486098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.486112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.486276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.486291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.486473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.486490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.486648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.486662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 
00:38:37.696 [2024-07-10 23:42:46.486785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.486800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.486891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.486904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.487023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.487038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.487222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.487264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.487502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.487542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.487696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.487736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.487983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.487997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.488169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.488218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.488370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.488410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.488568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.488608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 
00:38:37.696 [2024-07-10 23:42:46.488834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.488873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.489092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.489132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.489374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.489415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.489718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.489759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.489983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.489998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.490223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.490238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.490416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.490429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.490600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.490614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.490846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.490887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.491054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.491094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 
00:38:37.696 [2024-07-10 23:42:46.491311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.491353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.491599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.491639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.491846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.491860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.492064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.492104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.492420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.492461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.492727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.492754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.492951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.492972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.493095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.493115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.493380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.493401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.493603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.493622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 
00:38:37.696 [2024-07-10 23:42:46.493805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.493846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.494155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.494220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.494449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.494490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.494697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.696 [2024-07-10 23:42:46.494716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.696 qpair failed and we were unable to recover it. 00:38:37.696 [2024-07-10 23:42:46.494898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.494914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.495088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.495102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.495285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.495300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.495478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.495492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.495751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.495796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.496008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.496048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 
00:38:37.697 [2024-07-10 23:42:46.496281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.496322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.496535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.496575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.496782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.496797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.496974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.497002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.497111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.497126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.497366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.497408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.497701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.497715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.497829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.497844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.498013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.498028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.498278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.498293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 
00:38:37.697 [2024-07-10 23:42:46.498543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.498557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.498728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.498743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.498839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.498852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.498924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.498938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.499044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.499057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.499249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.499290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.499537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.499577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.499796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.499811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.500006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.500020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.500132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.500145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 
00:38:37.697 [2024-07-10 23:42:46.500320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.500335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.500430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.500443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.500601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.500613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.500872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.500886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.501093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.501134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.501524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.501607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.502003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.502045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.502312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.502337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.502523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.502540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 00:38:37.697 [2024-07-10 23:42:46.502649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.697 [2024-07-10 23:42:46.502663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.697 qpair failed and we were unable to recover it. 
00:38:37.697 [2024-07-10 23:42:46.502831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.697 [2024-07-10 23:42:46.502884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.697 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 23:42:46.502831 through 23:42:46.550292, cycling across tqpair objects 0x61500033fe80, 0x61500032ff80, and 0x615000350000; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111; repeated records elided ...]
00:38:37.703 [2024-07-10 23:42:46.550277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.703 [2024-07-10 23:42:46.550292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.703 qpair failed and we were unable to recover it.
00:38:37.703 [2024-07-10 23:42:46.550401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.550414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.550647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.550662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.550850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.550890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.551052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.551093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.551395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.551438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.551664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.551704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.551923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.551963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.552123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.552137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.552282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.552298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.552577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.552618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 
00:38:37.703 [2024-07-10 23:42:46.552843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.552884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.553048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.553088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.553340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.553382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.553605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.553645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.553872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.553886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.553999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.554014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.554244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.554286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.554509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.554549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.554755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.554769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.554947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.554988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 
00:38:37.703 [2024-07-10 23:42:46.555220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.555262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.555477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.555518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.555678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.555718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.555936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.555977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.556194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.556211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.556337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.556382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.556596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.556636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.556864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.556914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.557090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.557104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.703 [2024-07-10 23:42:46.557245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.557287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 
00:38:37.703 [2024-07-10 23:42:46.557508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.703 [2024-07-10 23:42:46.557548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.703 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.557854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.557894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.558070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.558110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.558401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.558442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.558667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.558707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.558939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.558979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.559202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.559244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.559473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.559513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.559816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.559867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.559988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.560003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 
00:38:37.704 [2024-07-10 23:42:46.560194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.560236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.560392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.560433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.560735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.560775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.560999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.561040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.561356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.561397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.561557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.561597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.561813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.561853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.562088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.562103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.562304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.562366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.562675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.562716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 
00:38:37.704 [2024-07-10 23:42:46.562931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.562973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.563201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.563217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.563331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.563346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.563603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.563644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.563974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.564014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.564316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.564371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.564531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.564572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.564806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.564847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.565067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.565082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.565247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.565289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 
00:38:37.704 [2024-07-10 23:42:46.565466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.565506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.565729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.565774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.565947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.565962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.566192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.566233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.566460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.566500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.566673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.566719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.566894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.566935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.567224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.567267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.567507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.567549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.567710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.567751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 
00:38:37.704 [2024-07-10 23:42:46.567921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.567935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.704 qpair failed and we were unable to recover it. 00:38:37.704 [2024-07-10 23:42:46.568110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.704 [2024-07-10 23:42:46.568150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.568304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.568345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.568506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.568546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.568759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.568799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.569080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.569120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.569306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.569347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.569532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.569573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.569783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.569797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.570032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.570073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 
00:38:37.705 [2024-07-10 23:42:46.570250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.570291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.570596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.570637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.570855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.570895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.571241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.571282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.571517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.571557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.571858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.571898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.572035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.572075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.572224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.572265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.572454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.572495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.572746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.572786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 
00:38:37.705 [2024-07-10 23:42:46.573033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.573074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.573353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.573394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.573635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.573677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.573816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.573830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.574011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.574051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.574277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.574318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.574535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.574575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.574799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.574839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.575063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.575078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.575274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.575289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 
00:38:37.705 [2024-07-10 23:42:46.575460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.575501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.575783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.575823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.575981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.576023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.576227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.576242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.576432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.576446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.576715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.576760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.576919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.576959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.577248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.577289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.577528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.577568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.577699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.577740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 
00:38:37.705 [2024-07-10 23:42:46.577961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.577975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.578158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.578209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.705 [2024-07-10 23:42:46.578491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.705 [2024-07-10 23:42:46.578543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.705 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.578718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.578759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.579059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.579100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.579420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.579434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.579535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.579548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.579709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.579723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.579898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.579912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.580146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.580199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 
00:38:37.706 [2024-07-10 23:42:46.580423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.580463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.580792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.580832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.581082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.581123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.581396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.581437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.581666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.581707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.581976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.581991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.582097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.582138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.582375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.582416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.582664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.582704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 00:38:37.706 [2024-07-10 23:42:46.582919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.706 [2024-07-10 23:42:46.582934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.706 qpair failed and we were unable to recover it. 
00:38:37.706 [2024-07-10 23:42:46.583210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.706 [2024-07-10 23:42:46.583254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.706 qpair failed and we were unable to recover it.
00:38:37.706 [the three lines above repeat for every connect attempt from 2024-07-10 23:42:46.583210 through 23:42:46.635471 (console timestamps 00:38:37.706-00:38:37.711), cycling over tqpair=0x61500033fe80, 0x615000350000, 0x61500032d780, and 0x61500032ff80, always with addr=10.0.0.2, port=4420; every attempt failed with errno = 111 and no qpair could be recovered]
00:38:37.711 [2024-07-10 23:42:46.635640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.635660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.635919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.635938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.636074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.636093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.636278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.636300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.636549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.636573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.636755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.636770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.636879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.636894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.637012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.637027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.637180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.637222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.637443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.637483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 
00:38:37.711 [2024-07-10 23:42:46.637629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.637669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.637974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.638014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.638294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.638335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.711 [2024-07-10 23:42:46.638557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.711 [2024-07-10 23:42:46.638596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.711 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.638762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.638802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.639102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.639142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.639368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.639384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.639576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.639630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.639917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.639957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.640238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.640254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 
00:38:37.712 [2024-07-10 23:42:46.640356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.640369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.640544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.640558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.640805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.640819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.640922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.640935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.641118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.641133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.641270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.641313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.641595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.641636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.641792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.641832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.642046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.642087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.642346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.642389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 
00:38:37.712 [2024-07-10 23:42:46.642527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.642542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.642713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.642727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.642904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.642921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.643070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.643085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.643276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.643291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.643466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.643507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.643742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.643782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.643979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.643994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.644244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.644259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.644352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.644365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 
00:38:37.712 [2024-07-10 23:42:46.644470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.644484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.644667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.644682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.644786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.644801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.644901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.644913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.645114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.645128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.645301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.645345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.645602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.645642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.645942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.645956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.646026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.646039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 00:38:37.712 [2024-07-10 23:42:46.646294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.712 [2024-07-10 23:42:46.646308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.712 qpair failed and we were unable to recover it. 
00:38:37.713 [2024-07-10 23:42:46.646468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.646482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.646654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.646668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.646770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.646784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.646964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.647004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.647231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.647272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.647433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.647473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.647651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.647690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.647910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.647950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.648183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.648225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.648484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.648498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 
00:38:37.713 [2024-07-10 23:42:46.648672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.648687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.648857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.648872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.649046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.649060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.649237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.649279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.649438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.649479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.649645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.649685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.649897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.649937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.650084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.650124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.650304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.650331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.650418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.650438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 
00:38:37.713 [2024-07-10 23:42:46.650556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.650577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.650773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.650827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.650977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.651025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.651264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.651307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.651459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.651501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.651667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.651708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.651917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.651958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.652231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.652272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.652497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.652517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.652761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.652781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 
00:38:37.713 [2024-07-10 23:42:46.652976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.653017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.653239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.653280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.653507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.653528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.653790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.653810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.654037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.654188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.654396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.654516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.654635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.713 [2024-07-10 23:42:46.654748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 
00:38:37.713 [2024-07-10 23:42:46.654948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.713 [2024-07-10 23:42:46.654989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.713 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.655215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.655256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.655409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.655423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.655654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.655668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.655857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.655871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.655982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.655997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.656200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.656241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.656466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.656506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.656684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.656728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.656991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.657051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 
00:38:37.714 [2024-07-10 23:42:46.657288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.657331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.657487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.657529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.657755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.657795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.657943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.657983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.658224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.658265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.658453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.658468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.658625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.658639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.658899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.658940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.659162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.659177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.659306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.659321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 
00:38:37.714 [2024-07-10 23:42:46.659496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.659536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.659787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.659827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.660046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.660093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.660280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.660321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.660514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.660553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.660762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.660776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.660951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.660965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.661067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.661108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.661335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.661376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.661552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.661594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 
00:38:37.714 [2024-07-10 23:42:46.661899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.661940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.662110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.662150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.662416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.662456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.662737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.662778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.662956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.662995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.663135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.663150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.663348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.663363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.663483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.663497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.663747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.663761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 00:38:37.714 [2024-07-10 23:42:46.663869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.714 [2024-07-10 23:42:46.663884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.714 qpair failed and we were unable to recover it. 
00:38:37.714 [2024-07-10 23:42:46.664052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.714 [2024-07-10 23:42:46.664066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.719 [2024-07-10 23:42:46.702247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.719 [2024-07-10 23:42:46.702311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:37.719 qpair failed and we were unable to recover it.
00:38:37.719 [2024-07-10 23:42:46.702559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.719 [2024-07-10 23:42:46.702601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:37.719 qpair failed and we were unable to recover it.
00:38:37.719 [2024-07-10 23:42:46.702770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.719 [2024-07-10 23:42:46.702812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:37.719 qpair failed and we were unable to recover it.
00:38:37.720 [2024-07-10 23:42:46.712706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:37.720 [2024-07-10 23:42:46.712721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:37.720 qpair failed and we were unable to recover it.
00:38:37.720 [2024-07-10 23:42:46.712900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.712914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.713090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.713130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.713420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.713461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.713736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.713750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.713917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.713931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.714188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.714230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.714460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.714500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.714721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.714761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.715044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.715084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.715315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.715330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 
00:38:37.720 [2024-07-10 23:42:46.715494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.715535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.715749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.715789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.716004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.716051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.716317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.716358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.716553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.716567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.716684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.716724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.716891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.716931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.717122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.717178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.717348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.717409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.717566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.717607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 
00:38:37.720 [2024-07-10 23:42:46.717767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.717807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.717966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.718006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.718141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.718193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.718412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.718453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.718731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.718772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.719009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.719049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.719225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.719267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.719479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.719494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.720 qpair failed and we were unable to recover it. 00:38:37.720 [2024-07-10 23:42:46.719671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.720 [2024-07-10 23:42:46.719712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.720002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.720042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 
00:38:37.721 [2024-07-10 23:42:46.720217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.720231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.720392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.720406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.720599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.720612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.720729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.720769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.721003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.721043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.721270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.721312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.721471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.721512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.721735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.721776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.722066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.722107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.722399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.722441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 
00:38:37.721 [2024-07-10 23:42:46.722665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.722706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.722921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.722961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.723206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.723221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.723446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.723461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.723661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.723675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.723850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.723864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.724038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.724077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.724323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.724337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.724566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.724606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.724850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.724891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 
00:38:37.721 [2024-07-10 23:42:46.725054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.725093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.725423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.725464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.725716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.725762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.725921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.725961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.726118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.726131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.726346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.726387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.726659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.726707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.726965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.727005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.727210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.727224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.727379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.727419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 
00:38:37.721 [2024-07-10 23:42:46.727585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.727624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.727896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.727935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.728255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.728297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.728450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.728490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.728727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.728742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.728833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.728846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.729030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.729044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.729298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.729339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.729488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.729527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.721 qpair failed and we were unable to recover it. 00:38:37.721 [2024-07-10 23:42:46.729745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.721 [2024-07-10 23:42:46.729784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.722 qpair failed and we were unable to recover it. 
00:38:37.722 [2024-07-10 23:42:46.730046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.722 [2024-07-10 23:42:46.730086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.722 qpair failed and we were unable to recover it. 00:38:37.722 [2024-07-10 23:42:46.730253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.722 [2024-07-10 23:42:46.730294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.722 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.730604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.730647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.730862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.730904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.731130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.731180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.731455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.731495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.731653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.731693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.731954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.731995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.732185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.732227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.732465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.732506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 
00:38:37.996 [2024-07-10 23:42:46.732739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.732779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.733020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.733061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.733372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.733396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.733513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.733527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.733703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.733717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.733881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.733895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.734015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.734030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.734153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.734170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.734276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.734289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.734519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.734533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 
00:38:37.996 [2024-07-10 23:42:46.734716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.734730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.734848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.734862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.735038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.735085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.735320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.735362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.735591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.735632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.735793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.735832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.736052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.736093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.736379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.736395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.736563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.996 [2024-07-10 23:42:46.736603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.996 qpair failed and we were unable to recover it. 00:38:37.996 [2024-07-10 23:42:46.736796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.736836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 
00:38:37.997 [2024-07-10 23:42:46.737079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.737120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.737307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.737348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.737650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.737695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.738008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.738049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.738215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.738273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.738439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.738454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.738710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.738752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.738969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.739009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.739146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.739214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.739519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.739559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 
00:38:37.997 [2024-07-10 23:42:46.739791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.739831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.740053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.740093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.740322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.740365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.740573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.740587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.740699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.740714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.740819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.740833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.741082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.741123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.741356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.741397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.741701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.741741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.742005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.742046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 
00:38:37.997 [2024-07-10 23:42:46.742249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.742290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.742610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.742624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.742785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.742799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.742987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.743002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.743197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.743242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.743421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.743461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.743628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.997 [2024-07-10 23:42:46.743666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.997 qpair failed and we were unable to recover it. 00:38:37.997 [2024-07-10 23:42:46.743944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.743984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.744151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.744214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.744380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.744421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 
00:38:37.998 [2024-07-10 23:42:46.744707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.744747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.744929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.744968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.745192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.745212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.745322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.745337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.745503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.745518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.745763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.745778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.745927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.745941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.746023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.746037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.746149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.746168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.746268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.746283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 
00:38:37.998 [2024-07-10 23:42:46.746427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.746461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.746627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.746667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.746818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.746859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.747090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.747130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.747351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.747366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.747531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.747546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.747744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.747780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.747950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.747965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.748129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.748145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.748315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.748330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 
00:38:37.998 [2024-07-10 23:42:46.748508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.748522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.748682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.748726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.749026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.749067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.749291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.749306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.749476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.749491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.749609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.749648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.749953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.998 [2024-07-10 23:42:46.749995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.998 qpair failed and we were unable to recover it. 00:38:37.998 [2024-07-10 23:42:46.750250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.750291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.750594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.750633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.750870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.750955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 
00:38:37.999 [2024-07-10 23:42:46.751204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.751253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.751475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.751517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.751783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.751826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.752151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.752216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.752434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.752475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.752824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.752866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.753085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.753126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.753366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.753408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.753687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.753729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.753959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.754001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 
00:38:37.999 [2024-07-10 23:42:46.754241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.754283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.754500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.754541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.754759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.754800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.754977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.755019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.755251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.755294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.755446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.755466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.755656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.755697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.755860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.755901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.756192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.756234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.756525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.756565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 
00:38:37.999 [2024-07-10 23:42:46.756782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.756823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.757068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.757109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.757287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.757329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.757660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.757701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.757917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.757958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.758098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.758138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:37.999 qpair failed and we were unable to recover it. 00:38:37.999 [2024-07-10 23:42:46.758353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.999 [2024-07-10 23:42:46.758373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.758665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.758707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.759034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.759075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.759349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.759370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 
00:38:38.000 [2024-07-10 23:42:46.759556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.759576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.759747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.759766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.759953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.759993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.760175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.760218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.760528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.760568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.760735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.760775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.761079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.761130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.761266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.761286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.761464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.761484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.761748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.761794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 
00:38:38.000 [2024-07-10 23:42:46.762090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.762130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.762367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.762408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.762681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.762723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.762957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.762999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.763229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.763272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.763557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.763576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.763841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.763895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.764065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.764105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.764333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.764375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.764589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.764609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 
00:38:38.000 [2024-07-10 23:42:46.764830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.764870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.765028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.765068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.765282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.765303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.765438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.765493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.765678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.765720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.000 [2024-07-10 23:42:46.765934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.000 [2024-07-10 23:42:46.765975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.000 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.766197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.766239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.766549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.766600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.766866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.766907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.767146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.767199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 
00:38:38.001 [2024-07-10 23:42:46.767425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.767469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.767594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.767614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.767818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.767860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.768094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.768135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.768433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.768452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.768652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.768671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.768887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.768908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.769197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.769226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.769430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.769449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.769578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.769597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 
00:38:38.001 [2024-07-10 23:42:46.769720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.769740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.769921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.769940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.770229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.770270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.770500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.770540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.770682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.770701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.770887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.770906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.771027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.771047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.771176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.771196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.771320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.771340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.771515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.771539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 
00:38:38.001 [2024-07-10 23:42:46.771818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.001 [2024-07-10 23:42:46.771859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.001 qpair failed and we were unable to recover it. 00:38:38.001 [2024-07-10 23:42:46.772017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.772058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.772211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.772250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.772422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.772441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.772636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.772677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.772839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.772879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.773140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.773193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.773319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.773341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.773446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.773464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.773583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.773603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 
00:38:38.002 [2024-07-10 23:42:46.773886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.773927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.774065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.774105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.774347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.774389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.774645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.774686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.774878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.774920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.775069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.775109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.775296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.775339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.775496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.775544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.775722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.775742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.775913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.775933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 
00:38:38.002 [2024-07-10 23:42:46.776083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.776103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.776328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.776362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.776558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.776600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.776897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.776989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.777154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.777206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.777431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.777473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.777578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.777598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.777743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.777784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.778011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.002 [2024-07-10 23:42:46.778052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.002 qpair failed and we were unable to recover it. 00:38:38.002 [2024-07-10 23:42:46.778294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.778336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 
00:38:38.003 [2024-07-10 23:42:46.778562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.778582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.778823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.778843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.778935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.778954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.779083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.779103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.779236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.779256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.779406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.779425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.779607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.779626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.779807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.779828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.780007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.780049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.780330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.780378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 
00:38:38.003 [2024-07-10 23:42:46.780682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.780701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.780896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.780916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.781150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.781173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.781364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.781385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.781553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.781572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.781767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.781808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.781978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.782019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.782235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.782276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.782482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.782502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.782720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.782761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 
00:38:38.003 [2024-07-10 23:42:46.782986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.783027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.783190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.783231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.783455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.783496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.783777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.783818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.784044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.784098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.784269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.784310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.784510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.784529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.784769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.784789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.003 [2024-07-10 23:42:46.784980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.003 [2024-07-10 23:42:46.785021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.003 qpair failed and we were unable to recover it. 00:38:38.004 [2024-07-10 23:42:46.785213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.004 [2024-07-10 23:42:46.785255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.004 qpair failed and we were unable to recover it. 
00:38:38.004 [2024-07-10 23:42:46.785537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.004 [2024-07-10 23:42:46.785578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:38.004 qpair failed and we were unable to recover it.
[... the same three-line error block repeats back-to-back for tqpair=0x61500032ff80, timestamps 2024-07-10 23:42:46.785 through 23:42:46.837; roughly 200 further occurrences elided ...]
00:38:38.010 [2024-07-10 23:42:46.837932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.010 [2024-07-10 23:42:46.838016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:38.010 qpair failed and we were unable to recover it.
[... the same block then repeats for tqpair=0x61500032d780 at 23:42:46.838; three further occurrences elided ...]
00:38:38.010 [2024-07-10 23:42:46.838741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.010 [2024-07-10 23:42:46.838761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:38.010 qpair failed and we were unable to recover it.
00:38:38.010 [2024-07-10 23:42:46.839014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.839035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.839191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.839211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.839478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.839498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.839688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.839709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.839893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.839913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.840022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.840042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.840177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.840238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.840532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.840574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.840885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.840926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 00:38:38.010 [2024-07-10 23:42:46.841109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.841157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.010 qpair failed and we were unable to recover it. 
00:38:38.010 [2024-07-10 23:42:46.841356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.010 [2024-07-10 23:42:46.841398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.841658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.841678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.841915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.841944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.842047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.842067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.842250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.842294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.842526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.842567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.842846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.842887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.843037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.843078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.843366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.843416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.843587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.843607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 
00:38:38.011 [2024-07-10 23:42:46.843869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.843910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.844122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.844189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.844428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.844475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.844665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.844685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.844790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.844809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.844992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.845033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.845343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.845386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.845612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.845632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.845783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.845824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.846062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.846103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 
00:38:38.011 [2024-07-10 23:42:46.846367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.846411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.846600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.846620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.846829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.846870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.847039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.847081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.847275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.847318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.847547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.847568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.847776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.847818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.848050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.848082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.848225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.848248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.011 qpair failed and we were unable to recover it. 00:38:38.011 [2024-07-10 23:42:46.848360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.011 [2024-07-10 23:42:46.848380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 
00:38:38.012 [2024-07-10 23:42:46.848487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.848506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.848626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.848645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.848879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.848898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.849083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.849102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.849295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.849313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.849461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.849504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.849734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.849776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.850002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.850044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.850325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.850370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.850561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.850612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 
00:38:38.012 [2024-07-10 23:42:46.850776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.850790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.851040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.851054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.851243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.851286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.851494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.851535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.851826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.851866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.852100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.852141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.852368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.852409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.852565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.852615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.852822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.852836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.852957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.852971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 
00:38:38.012 [2024-07-10 23:42:46.853183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.853226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.853477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.853517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.853744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.853785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.853959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.854000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.854301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.854344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.854405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:38.012 [2024-07-10 23:42:46.854667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.854717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.854945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.854964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.012 qpair failed and we were unable to recover it. 00:38:38.012 [2024-07-10 23:42:46.855157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.012 [2024-07-10 23:42:46.855182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.855361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.855380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 
00:38:38.013 [2024-07-10 23:42:46.855601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.855643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.855869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.855910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.856135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.856196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.856424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.856465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.856602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.856622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.856708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.856726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.856972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.857014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.857260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.857303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.857532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.857573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.857802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.857843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 
00:38:38.013 [2024-07-10 23:42:46.858067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.858110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.858371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.858413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.858633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.858675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.858956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.858997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.859151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.859202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.859394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.859435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.859709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.859729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.859861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.859880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.860080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.860123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.860299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.860344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 
00:38:38.013 [2024-07-10 23:42:46.860583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.860631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.860935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.860975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.861204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.861245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.861498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.861539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.861780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.861820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.862122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.862176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.862349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.862390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.013 [2024-07-10 23:42:46.862620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.013 [2024-07-10 23:42:46.862661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.013 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.862833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.862873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.863090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.863131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 
00:38:38.014 [2024-07-10 23:42:46.863368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.863408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.863663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.863710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.864001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.864051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.864357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.864409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.864790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.864832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.864979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.865021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.865276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.865318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.865543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.865563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.865766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.865786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.865990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.866010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 
00:38:38.014 [2024-07-10 23:42:46.866210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.866230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.866491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.866541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.866725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.866766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.867025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.867066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.867279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.867320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.867548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.867598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.867788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.867809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.867985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.868004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.868125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.868152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.868339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.868355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 
00:38:38.014 [2024-07-10 23:42:46.868558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.868573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.868736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.868751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.868933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.868973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.869224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.869267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.869497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.869538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.869789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.869829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.869989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.870029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.870257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.870299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.870599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.870614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 00:38:38.014 [2024-07-10 23:42:46.870811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.870826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it. 
00:38:38.014 [2024-07-10 23:42:46.870938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.014 [2024-07-10 23:42:46.870951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.014 qpair failed and we were unable to recover it.
[... the same three-line error triplet (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt from 23:42:46.870938 through 23:42:46.916118; only the timestamps differ between repetitions ...]
00:38:38.020 [2024-07-10 23:42:46.916104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.916118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it.
00:38:38.020 [2024-07-10 23:42:46.916375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.916390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.916563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.916577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.916824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.916865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.917111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.917157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.917396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.917437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.917595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.917636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.917862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.917876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.918054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.918069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.918297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.918313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.918490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.918523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 
00:38:38.020 [2024-07-10 23:42:46.918749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.918789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.919843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.919858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.920014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.920172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 
00:38:38.020 [2024-07-10 23:42:46.920290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.920557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.920696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.920801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.920933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.920948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.921106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.921120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.921235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.921250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.921369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.020 [2024-07-10 23:42:46.921383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.020 qpair failed and we were unable to recover it. 00:38:38.020 [2024-07-10 23:42:46.921588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.921602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.921719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.921735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 
00:38:38.021 [2024-07-10 23:42:46.921903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.921919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.922025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.922039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.922201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.922216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.922375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.922389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.922480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.922494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.922722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.922737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.922991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.923005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.923175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.923190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.923363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.923377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.923625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.923640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 
00:38:38.021 [2024-07-10 23:42:46.923819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.923833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.923937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.923950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.924113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.924128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.924288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.924304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.924503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.924518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.924693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.924707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.924934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.924949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.925074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.925192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.925301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 
00:38:38.021 [2024-07-10 23:42:46.925437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.925636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.925749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.925924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.925939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.926056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.926071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.926244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.926260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.926435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.926450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.926653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.926669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.926772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.926785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.926884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.926897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 
00:38:38.021 [2024-07-10 23:42:46.927171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.927186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.927363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.927404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.927622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.927662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.927801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.927816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.927987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.928001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.021 [2024-07-10 23:42:46.928115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.021 [2024-07-10 23:42:46.928128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.021 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.928233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.928246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.928520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.928535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.928633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.928646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.928749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.928764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 
00:38:38.022 [2024-07-10 23:42:46.928886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.928901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.929146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.929210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.929381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.929422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.929636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.929677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.929849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.929863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.930034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.930049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.930222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.930237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.930343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.930355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.930445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.930457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.930638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.930653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 
00:38:38.022 [2024-07-10 23:42:46.930822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.930836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.931010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.931052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.931356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.931397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.931614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.931655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.931798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.931839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.931996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.932035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.932222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.932236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.932416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.932431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.932543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.932558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.932783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.932798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 
00:38:38.022 [2024-07-10 23:42:46.932973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.932988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.933090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.933105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.933310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.933325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.933582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.933597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.933719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.933734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.933849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.933864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.933973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.933987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.934243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.934258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.934419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.934434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.934526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.934541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 
00:38:38.022 [2024-07-10 23:42:46.934642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.022 [2024-07-10 23:42:46.934656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.022 qpair failed and we were unable to recover it. 00:38:38.022 [2024-07-10 23:42:46.934759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.934772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.934953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.934968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.935064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.935104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.935348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.935389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.935603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.935644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.935788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.935802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.935979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.935993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.936195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.936209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.936366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.936380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 
00:38:38.023 [2024-07-10 23:42:46.936485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.936501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.936685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.936726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.936937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.936977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.937199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.937242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.937480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.937527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.937742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.937762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.937944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.937958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.938116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.938132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.938400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.938442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.938601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.938641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 
00:38:38.023 [2024-07-10 23:42:46.938954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.938993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.939140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.939191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.939417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.939457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.939679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.939693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.939883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.939897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.940063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.940103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.940396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.940438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.940665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.940706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.940988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.941028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.941169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.941183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 
00:38:38.023 [2024-07-10 23:42:46.941354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.941369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.941654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.941669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.941846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.941861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.941963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.941978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.942174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.942190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.942373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.942388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.942592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.942633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.942875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.942917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.943065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.943079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 00:38:38.023 [2024-07-10 23:42:46.943254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.023 [2024-07-10 23:42:46.943269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.023 qpair failed and we were unable to recover it. 
[... the same three-line connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for each remaining reconnect attempt to tqpair=0x61500033fe80 in this interval ...]
00:38:38.029 [2024-07-10 23:42:46.986593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.029 [2024-07-10 23:42:46.986634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.029 qpair failed and we were unable to recover it.
00:38:38.029 [2024-07-10 23:42:46.986908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.986923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.987202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.987244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.987468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.987508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.987722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.987761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.987930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.987971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.988199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.988241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.988520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.988561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.988785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.988825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.988978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.988993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.989106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.989151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 
00:38:38.029 [2024-07-10 23:42:46.989467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.989514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.989799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.989839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.990132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.990184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.990412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.990452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.990558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.990572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.990739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.990754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.990861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.990874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.990992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.991005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.991185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.991201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.991427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.991442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 
00:38:38.029 [2024-07-10 23:42:46.991537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.991550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.991727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.991742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.991922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.991962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.992271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.992313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.992558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.992599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.992776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.992816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.993119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.993184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.993446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.993461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.993631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.993646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.993821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.993836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 
00:38:38.029 [2024-07-10 23:42:46.994087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.994102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.994293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.994312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.994472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.994486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.994717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.994732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.994907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.994922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.995093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.995107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.995247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.995261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.995379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.995394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.995491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.029 [2024-07-10 23:42:46.995503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.029 qpair failed and we were unable to recover it. 00:38:38.029 [2024-07-10 23:42:46.995632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.995645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 
00:38:38.030 [2024-07-10 23:42:46.995974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.996015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.996247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.996288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.996431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.996445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.996672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.996686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.996782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.996795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.997024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.997236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.997357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.997443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.997563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 
00:38:38.030 [2024-07-10 23:42:46.997757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.997980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.997995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.998173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.998188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.998299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.998339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.998507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.998548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.998833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.998873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.999045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.999086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.999233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.999285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.999470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.999485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:46.999605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.999619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 
00:38:38.030 [2024-07-10 23:42:46.999846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:46.999862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.000022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.000037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.000249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.000291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.000516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.000556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.000739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.000780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.001054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.001094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.001426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.001468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.001682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.001722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.001965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.002006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.002170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.002185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 
00:38:38.030 [2024-07-10 23:42:47.002289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.002302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.002473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.002514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.002740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.002781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.003101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.003145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.030 qpair failed and we were unable to recover it. 00:38:38.030 [2024-07-10 23:42:47.003311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.030 [2024-07-10 23:42:47.003326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.003443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.003456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.003642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.003657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.003912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.003953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.004183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.004225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.004450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.004491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 
00:38:38.031 [2024-07-10 23:42:47.004845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.004885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.005062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.005103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.005408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.005423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.005600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.005614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.005776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.005791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.005978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.005993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.006117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.006132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.006357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.006372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.006546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.006560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.006706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.006721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 
00:38:38.031 [2024-07-10 23:42:47.006905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.006923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.007015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.007028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.007200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.007214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.007508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.007523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.007650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.007666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.007770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.007788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.007911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.007926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.008084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.008099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.008272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.008287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.008478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.008493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 
00:38:38.031 [2024-07-10 23:42:47.008569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.008621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.008945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.008987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.009210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.009225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.009404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.009445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.009690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.009730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.009926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.009967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.010162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.010177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.010291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.010332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.010552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.010593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.010722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.010764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 
00:38:38.031 [2024-07-10 23:42:47.010918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.010933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.011212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.011254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.011487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.011527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.011749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.011790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.011946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.011960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.012196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.012238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.012438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.012477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.012782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.031 [2024-07-10 23:42:47.012823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.031 qpair failed and we were unable to recover it. 00:38:38.031 [2024-07-10 23:42:47.012992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.013032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.013256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.013298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 
00:38:38.032 [2024-07-10 23:42:47.013510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.013524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.013707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.013748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.013901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.013914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.014079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.014094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.014282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.014298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.014414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.014454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.014615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.014656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.014871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.014912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.015043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.015056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 00:38:38.032 [2024-07-10 23:42:47.015233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.032 [2024-07-10 23:42:47.015248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.032 qpair failed and we were unable to recover it. 
00:38:38.032 [2024-07-10 23:42:47.015360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.032 [2024-07-10 23:42:47.015407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.032 qpair failed and we were unable to recover it.
00:38:38.310 [... the same three-line error sequence repeats for every reconnect attempt against tqpair=0x61500033fe80 (10.0.0.2:4420) between 23:42:47.015360 and 23:42:47.064510, each attempt ending "qpair failed and we were unable to recover it." ...]
00:38:38.310 [2024-07-10 23:42:47.064601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.064615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.064726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.064741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.064905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.064920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.065098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.065114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.065343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.065358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.065587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.065602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.065701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.065715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.065890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.065904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.066065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.066080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.066252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.066266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 
00:38:38.310 [2024-07-10 23:42:47.066465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.066480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.066648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.066667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.066919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.066934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.067909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.067923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 
00:38:38.310 [2024-07-10 23:42:47.068084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.068099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.068257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.068272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.068367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.068382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.068608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.068622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.310 qpair failed and we were unable to recover it. 00:38:38.310 [2024-07-10 23:42:47.068785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.310 [2024-07-10 23:42:47.068799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.068962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.068977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.069175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.069190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.069305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.069320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.069426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.069440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.069556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.069570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 
00:38:38.311 [2024-07-10 23:42:47.069745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.069760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.069882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.069899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.070009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.070023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.070151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.070172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.070338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.070353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.070530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.070544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.070822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.070837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.070999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.071015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.071219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.071233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.071434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.071449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 
00:38:38.311 [2024-07-10 23:42:47.071611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.071625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.071827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.071842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.072968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.072982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 
00:38:38.311 [2024-07-10 23:42:47.073208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.073223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.073451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.073466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.073572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.073588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.073759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.073773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.074017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.074032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.074123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.074138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.074316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.074331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.074491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.074509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.074622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.074637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.074862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.074876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 
00:38:38.311 [2024-07-10 23:42:47.074989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.075004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.075199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.075214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.075288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.075303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.075491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.075506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.075628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.311 [2024-07-10 23:42:47.075642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.311 qpair failed and we were unable to recover it. 00:38:38.311 [2024-07-10 23:42:47.075735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.075750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.075857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.075871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.076028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.076043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.076240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.076255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.076411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.076426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 
00:38:38.312 [2024-07-10 23:42:47.076659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.076674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.076841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.076856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.077046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.077061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.077185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.077200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.077387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.077402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.077572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.077595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.077720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.077734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.077903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.077918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 
00:38:38.312 [2024-07-10 23:42:47.078253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.078930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.078945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.079055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.079070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.079298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.079313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.079488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.079503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.079618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.079632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 
00:38:38.312 [2024-07-10 23:42:47.079793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.079808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.080966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.080983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.081210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.312 [2024-07-10 23:42:47.081225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.312 qpair failed and we were unable to recover it. 00:38:38.312 [2024-07-10 23:42:47.081384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.081399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 
00:38:38.313 [2024-07-10 23:42:47.081495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.081509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.081608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.081622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.081877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.081892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.082017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.082031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.082220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.082235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.082417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.082431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.082610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.082625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.082736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.082750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.082929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.082944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.083145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.083165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 
00:38:38.313 [2024-07-10 23:42:47.083344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.083359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.083503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.083518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.083688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.083702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.083895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.083909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.084104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.084119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.084273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.084288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.084462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.084476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.084703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.084717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.084843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.084857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.084964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.084979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 
00:38:38.313 [2024-07-10 23:42:47.085092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.085106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.085291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.085306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.085414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.085429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.085606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.085620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.085743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.085758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.085875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.085890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.085994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.086008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.086131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.086146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.086375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.086418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.086701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.086743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 
00:38:38.313 [2024-07-10 23:42:47.086961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.087181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.087271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.087466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.087648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.087753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.087892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.087907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.088025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.088042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.088332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.088349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 00:38:38.313 [2024-07-10 23:42:47.088514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.313 [2024-07-10 23:42:47.088529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.313 qpair failed and we were unable to recover it. 
00:38:38.313 [2024-07-10 23:42:47.088634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.088650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.088869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.088886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.089050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.089069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.089233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.089253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.089383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.089398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.089601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.089615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.089731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.089745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.089980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.089995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.090115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.090129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.090310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.090325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 
00:38:38.314 [2024-07-10 23:42:47.090448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.090463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.090651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.090667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.090769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.090782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.090887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.090901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.091077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.091091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.091201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.091216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.091336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.091350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.091452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.091467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.091716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.091731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.091980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.091994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 
00:38:38.314 [2024-07-10 23:42:47.092108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.092122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.092289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.092304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.092468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.092484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.092597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.092611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.092722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.092737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.092985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.093107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.093241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.093433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.093555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 
00:38:38.314 [2024-07-10 23:42:47.093670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.093855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.093870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.094063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.094077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.094261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.094276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.094531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.094546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.094658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.094672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.094848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.094863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.094965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.094982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.095089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.095104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.095265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.095280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 
00:38:38.314 [2024-07-10 23:42:47.095429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.314 [2024-07-10 23:42:47.095443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.314 qpair failed and we were unable to recover it. 00:38:38.314 [2024-07-10 23:42:47.095611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.095626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.095788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.095803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.095965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.095980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.096091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.096105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.096357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.096372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.096548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.096562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.096749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.096763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.096988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.097107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 
00:38:38.315 [2024-07-10 23:42:47.097282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.097461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.097655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.097795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.097915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.097930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.098146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.098172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.098280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.098295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.098457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.098472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.098726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.098740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.098849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.098864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 
00:38:38.315 [2024-07-10 23:42:47.098976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.098990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.099162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.099178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.099291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.099306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.099542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.099558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.099751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.099779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.100005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.100032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.100227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.100250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.100436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.100456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.100593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.100612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.100806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.100826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 
00:38:38.315 [2024-07-10 23:42:47.101032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.101051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.101183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.101204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.101388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.101407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.101592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.101612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.101801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.101821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.101996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.102016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.102121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.102140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.102392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.102421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.102616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.315 [2024-07-10 23:42:47.102637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.315 qpair failed and we were unable to recover it. 00:38:38.315 [2024-07-10 23:42:47.102765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.102781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 
00:38:38.316 [2024-07-10 23:42:47.102952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.102968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.103128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.103175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.103302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.103316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.103474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.103488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.103597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.103611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.103844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.103859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.104038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.104168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.104290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.104384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 
00:38:38.316 [2024-07-10 23:42:47.104552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.104682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.104928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.104943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.105105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.105120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.105305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.105320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.105545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.105560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.105726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.105741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.105911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.105926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.106030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.106044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.106201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.106216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 
00:38:38.316 [2024-07-10 23:42:47.106374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.106389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.106492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.106506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.106686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.106700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.106895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.106910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.107120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.107143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.107262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.107282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.107458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.107477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.107662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.107681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.107947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.107966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.108149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.108174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 
00:38:38.316 [2024-07-10 23:42:47.108385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.108405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.108482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.108500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.108627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.108647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.108773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.108792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.316 [2024-07-10 23:42:47.108981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.316 [2024-07-10 23:42:47.109000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.316 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.109240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.109264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.109459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.109479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.109661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.109685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.109819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.109838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.109947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.109963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 
00:38:38.317 [2024-07-10 23:42:47.110141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.110155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.110345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.110360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.110586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.110601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.110764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.110778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.110889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.110904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.111135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.111150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.111276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.111291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.111410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.111425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.111623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.111638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.111802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.111817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 
00:38:38.317 [2024-07-10 23:42:47.111910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.111925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.112162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.112178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.112292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.112307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.112473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.112487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.112659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.112674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.112787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.112802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.112960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.112974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.113078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.113093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.113212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.113226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.113452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.113466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 
00:38:38.317 [2024-07-10 23:42:47.113711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.113726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.113839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.317 [2024-07-10 23:42:47.113853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.317 qpair failed and we were unable to recover it. 00:38:38.317 [2024-07-10 23:42:47.114051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.114065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.114173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.114188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.114326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.114355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.114489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.114514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.114659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.114680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.114866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.114888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.115074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.115095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.115315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.115335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 
00:38:38.318 [2024-07-10 23:42:47.115468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.115484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.115647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.115662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.115892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.115907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.116082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.116201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.116317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.116457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.116652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.116831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 00:38:38.318 [2024-07-10 23:42:47.116992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.318 [2024-07-10 23:42:47.117006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.318 qpair failed and we were unable to recover it. 
00:38:38.318 [2024-07-10 23:42:47.117170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.318 [2024-07-10 23:42:47.117185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.318 qpair failed and we were unable to recover it.
00:38:38.318 [... the same three-record pattern repeats, roughly 200 times, from 23:42:47.117170 through 23:42:47.153398 (elapsed prefixes 00:38:38.318-00:38:38.324); only the microsecond timestamp and, in a handful of records, the tqpair handle vary (0x61500033fe80, 0x615000350000, 0x61500032ff80, 0x61500032d780); every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:38:38.324 [2024-07-10 23:42:47.153589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.324 [2024-07-10 23:42:47.153609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.324 qpair failed and we were unable to recover it. 00:38:38.324 [2024-07-10 23:42:47.153806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.324 [2024-07-10 23:42:47.153820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.324 qpair failed and we were unable to recover it. 00:38:38.324 [2024-07-10 23:42:47.153934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.324 [2024-07-10 23:42:47.153950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.324 qpair failed and we were unable to recover it. 00:38:38.324 [2024-07-10 23:42:47.154111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.324 [2024-07-10 23:42:47.154126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.324 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.154208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.154224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.154450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.154465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.154693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.154708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.154893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.154908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.155020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.155035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.155219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.155234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-07-10 23:42:47.155424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.155439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.155542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.155557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.155691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.155706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.155960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.155974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.156095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.156112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.156339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.156354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.156463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.156478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.156653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.156667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.156783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.156798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.156920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.156935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-07-10 23:42:47.157047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.157062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.157262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.157277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.157439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.157453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.157624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.157639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.157753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.157770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.158022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.158150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.158353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.158459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.158574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-07-10 23:42:47.158697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.158899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.158914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.159940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.159955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.160128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.160143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-07-10 23:42:47.160323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.160338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.160434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.160449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.325 [2024-07-10 23:42:47.160618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.325 [2024-07-10 23:42:47.160633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.325 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.160885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.160900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.161141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.161156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.161396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.161411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.161533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.161547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.161785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.161800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.161917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.161932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.162058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.162073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-07-10 23:42:47.162174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.162190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.162294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.162308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.162490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.162505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.162684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.162698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.162939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.162955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.163127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.163320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.163446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.163533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.163662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-07-10 23:42:47.163844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.163980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.163995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.164220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.164235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.164354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.164369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.164482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.164503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.164621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.164635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.164890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.164905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.165080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.165094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.165201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.165216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.165315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.165330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-07-10 23:42:47.165558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.165572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.165665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.165679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.165850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.165864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.166039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.166123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.166306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.166433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.166541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.326 [2024-07-10 23:42:47.166736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.326 qpair failed and we were unable to recover it. 00:38:38.326 [2024-07-10 23:42:47.166914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.166929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 
00:38:38.327 [2024-07-10 23:42:47.167033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.167048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.167212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.167227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.167480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.167494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.167610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.167624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.167722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.167737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.167856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.167870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.168042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.168057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.168167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.168182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.168285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.168299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.168474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.168489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 
00:38:38.327 [2024-07-10 23:42:47.168719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.168733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.168908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.168922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.169103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.169118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.169303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.169318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.169550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.169566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.169735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.169750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.169978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.169993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.170167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.170181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.170341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.170356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.170583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.170598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 
00:38:38.327 [2024-07-10 23:42:47.170845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.170860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.171090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.171104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.171217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.171232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.171392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.171408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.171593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.171607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.171713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.171727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.171859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.171876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.172034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.172168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.172363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 
00:38:38.327 [2024-07-10 23:42:47.172491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.172619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.172755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.172859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.172874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.327 qpair failed and we were unable to recover it. 00:38:38.327 [2024-07-10 23:42:47.173035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.327 [2024-07-10 23:42:47.173050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.173224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.173239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.173332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.173347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.173531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.173546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.173664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.173679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.173780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.173795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 
00:38:38.328 [2024-07-10 23:42:47.173899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.173914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.174882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.174977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.175009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 00:38:38.328 [2024-07-10 23:42:47.175106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.328 [2024-07-10 23:42:47.175119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.328 qpair failed and we were unable to recover it. 
00:38:38.328 [2024-07-10 23:42:47.175284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.328 [2024-07-10 23:42:47.175299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.328 qpair failed and we were unable to recover it.
[... the same three-line record repeats for every reconnect attempt from 23:42:47.175395 through 23:42:47.211554 (console timestamps 00:38:38.328-00:38:38.333), always connect() errno = 111 against addr=10.0.0.2, port=4420; most attempts target tqpair=0x61500033fe80, with a run against tqpair=0x61500032ff80 (23:42:47.201224-23:42:47.204426) and single attempts against tqpair=0x615000350000, tqpair=0x61500032d780, and tqpair=0x61500032ff80 (23:42:47.208501-23:42:47.208913) ...]
00:38:38.333 [2024-07-10 23:42:47.211656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.333 [2024-07-10 23:42:47.211671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.333 qpair failed and we were unable to recover it.
00:38:38.333 [2024-07-10 23:42:47.211932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.211948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.212125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.212144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.212316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.212331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.212453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.212468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.212593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.212607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.212867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.212881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.213088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.213103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.213209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.213225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.213405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.213420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.213529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.213544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 
00:38:38.333 [2024-07-10 23:42:47.213649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.213664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.213828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.213843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.214022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.214037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.214127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.333 [2024-07-10 23:42:47.214143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.333 qpair failed and we were unable to recover it. 00:38:38.333 [2024-07-10 23:42:47.214319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.214334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.214533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.214548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.214728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.214743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.214852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.214866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.214981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.214997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.215275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.215291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 
00:38:38.334 [2024-07-10 23:42:47.215470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.215485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.215789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.215805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.215906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.215920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.216046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.216061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.216250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.216267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.216445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.216461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.216623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.216638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.216744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.216758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.216877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.216892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.217078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.217092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 
00:38:38.334 [2024-07-10 23:42:47.217198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.217214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.217316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.217330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.217493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.217508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.217682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.217697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.217865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.217883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.217992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.218008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.218260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.218276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.218384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.218400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.218581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.218595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.218712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.218727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 
00:38:38.334 [2024-07-10 23:42:47.218914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.218929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.219035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.219050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.219279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.219295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.219455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.219470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.219648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.219662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.219893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.219908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.220095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.220110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.220272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.220287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.220467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.220482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.220661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.220676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 
00:38:38.334 [2024-07-10 23:42:47.220914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.220930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.221168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.221184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.221294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.221309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.221474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.221490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.221654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.334 [2024-07-10 23:42:47.221668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.334 qpair failed and we were unable to recover it. 00:38:38.334 [2024-07-10 23:42:47.221799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.221814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.221916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.221930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.222034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.222050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.222253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.222268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.222359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.222373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 
00:38:38.335 [2024-07-10 23:42:47.222499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.222515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.222675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.222689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.222863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.222879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.223004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.223018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.223118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.223133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.223383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.223403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.223635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.223650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.223811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.223826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.223921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.223936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.224034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.224048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 
00:38:38.335 [2024-07-10 23:42:47.224254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.224269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.224451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.224467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.224641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.224656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.224842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.224857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.224953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.224970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.225085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.225101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.225261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.225276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.225469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.225484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.225655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.225669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.225843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.225857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 
00:38:38.335 [2024-07-10 23:42:47.225975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.225989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.226185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.226200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.226367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.226381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.226552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.226566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.226813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.226828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.227000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.227017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.227130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.227144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.227252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.227267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.227394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.335 [2024-07-10 23:42:47.227408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.335 qpair failed and we were unable to recover it. 00:38:38.335 [2024-07-10 23:42:47.227590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.227603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 
00:38:38.336 [2024-07-10 23:42:47.227773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.227788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.227889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.227903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.228090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.228214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.228338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.228534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.228778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.228896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.228993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.229172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 
00:38:38.336 [2024-07-10 23:42:47.229310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.229415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.229526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.229733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.229847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.229861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.230033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.230048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.230158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.230178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.230402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.230418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.230490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.230504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.230681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.230695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 
00:38:38.336 [2024-07-10 23:42:47.230801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.230816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.231069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.231084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.231284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.231300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.231472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.231487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.231673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.231689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.231809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.231824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.231999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.232014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.232190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.232205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.232379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.232394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.232566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.232582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 
00:38:38.336 [2024-07-10 23:42:47.232824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.232840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.233012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.233027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.233191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.233207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.233384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.233399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.233573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.233588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.233750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.233764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.233874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.233889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.234007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.234021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.234261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.234277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 00:38:38.336 [2024-07-10 23:42:47.234389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.336 [2024-07-10 23:42:47.234408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.336 qpair failed and we were unable to recover it. 
00:38:38.336 [2024-07-10 23:42:47.234601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.337 [2024-07-10 23:42:47.234616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.337 qpair failed and we were unable to recover it.
[... the same three-line error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 23:42:47.234601 through 23:42:47.271453 ...]
00:38:38.342 [2024-07-10 23:42:47.271453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.342 [2024-07-10 23:42:47.271468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.342 qpair failed and we were unable to recover it.
00:38:38.342 [2024-07-10 23:42:47.271580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.271595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.271774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.271791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.271969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.271984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.272165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.272180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.272291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.272306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.272479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.272494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.272666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.272680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.272893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.272908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.273167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.273183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.273301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.273317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 
00:38:38.342 [2024-07-10 23:42:47.273519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.273533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.273722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.273736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.273845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.273858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.273949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.273964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.274071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.274086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.274253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.274270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.274462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.274476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.274651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.274666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.274837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.274852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.274974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.274988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 
00:38:38.342 [2024-07-10 23:42:47.275214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.275230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.275339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.275354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.275526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.275542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.275723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.275738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.275922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.275937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.276057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.276073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.276269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.276285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.276444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.276459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.276663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.276678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.276766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.276781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 
00:38:38.342 [2024-07-10 23:42:47.276967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.276982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.342 qpair failed and we were unable to recover it. 00:38:38.342 [2024-07-10 23:42:47.277078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.342 [2024-07-10 23:42:47.277093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.277269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.277284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.277395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.277411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.277573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.277588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.277683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.277697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.277843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.277857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.278085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.278099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.278212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.278227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.278407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.278422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 
00:38:38.343 [2024-07-10 23:42:47.278529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.278543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.278717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.278734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.278838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.278852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.279906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.279920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 
00:38:38.343 [2024-07-10 23:42:47.280017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.280032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.280290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.280306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.280403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.280418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.280549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.280565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.280746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.280760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.280885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.280900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.281030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.281047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.281223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.281238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.281418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.281432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.281657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.281673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 
00:38:38.343 [2024-07-10 23:42:47.281900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.281914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.282097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.282111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.282228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.282244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.282339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.282354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.282463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.282478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.282642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.282657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.282884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.282898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.283009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.283023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.283142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.283174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.283384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.283415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 
00:38:38.343 [2024-07-10 23:42:47.283553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.283577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.283764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.283781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.343 [2024-07-10 23:42:47.283952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.343 [2024-07-10 23:42:47.283967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.343 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.284130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.284146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.284324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.284339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.284526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.284541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.284648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.284663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.284774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.284788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.284952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.284967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.285079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.285093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 
00:38:38.344 [2024-07-10 23:42:47.285262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.285278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.285382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.285398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.285559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.285575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.285765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.285780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.285952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.285967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.286096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.286110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.286345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.286360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.286523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.286538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.286707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.286722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.286831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.286845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 
00:38:38.344 [2024-07-10 23:42:47.287023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.287038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.287154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.287183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.287361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.287377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.287560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.287575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.287764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.287779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.287890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.287904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.288069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.288084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.288260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.288276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.288383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.288399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.288569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.288584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 
00:38:38.344 [2024-07-10 23:42:47.288747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.288761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.288918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.288933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.289094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.289109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.289224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.289239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.289408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.289424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.289677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.289691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.289858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.289873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.290076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.290091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.290295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.344 [2024-07-10 23:42:47.290319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.344 qpair failed and we were unable to recover it. 00:38:38.344 [2024-07-10 23:42:47.290442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.290463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 
00:38:38.345 [2024-07-10 23:42:47.290668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.290689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.290826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.290846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.291020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.291040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.291237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.291258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.291456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.291476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.291599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.291619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.291829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.291849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.292083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.292103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.292280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.292302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.292504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.292523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 
00:38:38.345 [2024-07-10 23:42:47.292777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.292797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.292912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.292938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.293116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.293136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.293401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.293422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.293613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.293634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.293736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.293756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.293970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.293990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.294181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.294202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.294438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.294458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.294662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.294682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 
00:38:38.345 [2024-07-10 23:42:47.294799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.294819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.295079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.295099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.295203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.295224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.295411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.295432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.295561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.295581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.295760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.295780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.295972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.295992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.296166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.296187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.296368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.296388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 00:38:38.345 [2024-07-10 23:42:47.296620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.345 [2024-07-10 23:42:47.296639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.345 qpair failed and we were unable to recover it. 
00:38:38.345 .. 00:38:38.350 [2024-07-10 23:42:47.296823 .. 23:42:47.331868] the same three-line failure sequence repeats continuously: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000, 0x61500032d780, 0x61500032ff80, or 0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:38:38.350 [2024-07-10 23:42:47.332065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.350 [2024-07-10 23:42:47.332079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.350 qpair failed and we were unable to recover it. 00:38:38.350 [2024-07-10 23:42:47.332250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.350 [2024-07-10 23:42:47.332265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.350 qpair failed and we were unable to recover it. 00:38:38.350 [2024-07-10 23:42:47.332427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.350 [2024-07-10 23:42:47.332442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.350 qpair failed and we were unable to recover it. 00:38:38.350 [2024-07-10 23:42:47.332532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.350 [2024-07-10 23:42:47.332547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.350 qpair failed and we were unable to recover it. 00:38:38.350 [2024-07-10 23:42:47.332640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.350 [2024-07-10 23:42:47.332656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.350 qpair failed and we were unable to recover it. 00:38:38.350 [2024-07-10 23:42:47.332830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.332846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.332950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.332967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.333085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.333219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.333391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 
00:38:38.351 [2024-07-10 23:42:47.333531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.333648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.333823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.333950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.333966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.334081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.334096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.334258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.334273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.334383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.334398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.334626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.334640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.334798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.334813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.334978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.334992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 
00:38:38.351 [2024-07-10 23:42:47.335118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.335134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.335308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.335323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.335572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.335588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.335696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.335711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.335816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.335830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.335934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.335948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 
00:38:38.351 [2024-07-10 23:42:47.336549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.336802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.336985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.337109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.337247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.337492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.337608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.337804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.337980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.337995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 
00:38:38.351 [2024-07-10 23:42:47.338176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.338192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.338308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.338323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.338484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.338507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.338672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.338687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.338915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.338930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.339030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.351 [2024-07-10 23:42:47.339044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.351 qpair failed and we were unable to recover it. 00:38:38.351 [2024-07-10 23:42:47.339191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.339207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.339368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.339383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.339635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.339652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.339723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.339739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 
00:38:38.352 [2024-07-10 23:42:47.339848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.339862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.340895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.340910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 
00:38:38.352 [2024-07-10 23:42:47.341322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.341947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.341962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.342130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.342144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.342325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.342341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.342456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.342470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.342574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.342589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 
00:38:38.352 [2024-07-10 23:42:47.342749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.342764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.342881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.342896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.342998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.343190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.343319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.343525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.343651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.343835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.343975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.343990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.344151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.344170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 
00:38:38.352 [2024-07-10 23:42:47.344342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.344357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.344521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.352 [2024-07-10 23:42:47.344536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.352 qpair failed and we were unable to recover it. 00:38:38.352 [2024-07-10 23:42:47.344642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.344657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.344760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.344776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.344948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.344965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.345069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.345083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.345248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.345264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.345382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.345397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.345572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.345589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.345767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.345782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 
00:38:38.353 [2024-07-10 23:42:47.345884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.345899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.345986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.346176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.346320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.346427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.346582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.346710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.346874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.346890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.347120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.347135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.347230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.347245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 
00:38:38.353 [2024-07-10 23:42:47.347417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.347433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.347708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.347723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.347799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.347814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.347990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.348172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.348283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.348391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.348608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.348830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.348937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.348953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 
00:38:38.353 [2024-07-10 23:42:47.349110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.349125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.353 qpair failed and we were unable to recover it. 00:38:38.353 [2024-07-10 23:42:47.349291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.353 [2024-07-10 23:42:47.349306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.349546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.349561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.349819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.349834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.349939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.349955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.350068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.350083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.350184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.350200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.350294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.350310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.350421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.350436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.350637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.350652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 
00:38:38.354 [2024-07-10 23:42:47.350877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.350893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.351091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.351107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.351292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.351309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.351436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.351453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.351576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.351593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.351830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.351846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.352036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.352052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.352183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.352199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.352360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.352377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 00:38:38.354 [2024-07-10 23:42:47.352478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.354 [2024-07-10 23:42:47.352494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.354 qpair failed and we were unable to recover it. 
00:38:38.354 [2024-07-10 23:42:47.352594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.354 [2024-07-10 23:42:47.352609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.354 qpair failed and we were unable to recover it.
00:38:38.354 [... the same three-line failure for tqpair=0x61500033fe80 (connect() errno = 111, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 23:42:47.352842 through 23:42:47.360834 ...]
00:38:38.637 [... the same failure for tqpair=0x61500033fe80 keeps repeating from 23:42:47.361018 through 23:42:47.373333, interleaved with the following shell trace of the target restart ...]
00:38:38.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2673371 Killed "${NVMF_APP[@]}" "$@"
00:38:38.637 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:38.637 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:38.637 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:38:38.637 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:38:38.637 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2674234
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2674234
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2674234 ']'
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:38.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:38.638 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:38:38.639 23:42:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
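The trace above is target_disconnect.sh bouncing the target: the previous nvmf_tgt instance (PID 2673371) was killed, and disconnect_init relaunches it inside the cvl_0_0_ns_spdk network namespace as PID 2674234, then polls for its RPC socket. A minimal bash sketch of that flow, assuming the same paths and flags (the helper name and structure below are hypothetical, not the actual nvmf/common.sh implementation):

    # Hypothetical condensation of the kill/restart flow seen in the trace;
    # the real logic lives in nvmf/common.sh (nvmfappstart/waitforlisten).
    restart_nvmf_tgt() {
        local bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
        sudo kill -9 "$nvmfpid" 2>/dev/null         # old target (2673371) -> "Killed"
        sudo ip netns exec cvl_0_0_ns_spdk "$bin" -i 0 -e 0xFFFF -m 0xF0 &
        nvmfpid=$!                                  # new target (2674234)
        local i
        for ((i = 0; i < 100; i++)); do             # mirrors max_retries=100
            [[ -S /var/tmp/spdk.sock ]] && return 0 # RPC socket is up
            sleep 0.1
        done
        return 1                                    # target never came up
    }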
00:38:38.639 [... the connect failures continue uninterrupted from 23:42:47.373592 through 23:42:47.386916, still errno = 111 against addr=10.0.0.2, port=4420; most attempts are for tqpair=0x61500033fe80, isolated attempts hit tqpair=0x61500032ff80, 0x615000350000 and 0x61500032d780, and from 23:42:47.383330 onward every attempt targets tqpair=0x61500032ff80, each ending in "qpair failed and we were unable to recover it." ...]
00:38:38.641 [2024-07-10 23:42:47.387065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.387085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.387289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.387308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.387508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.387528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.387779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.387800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.387982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.388002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.388141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.388170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.388380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.388399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.388537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.388558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.388691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.388716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.388895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.388914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 
00:38:38.641 [2024-07-10 23:42:47.389028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.389169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.389312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.389456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.389585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.389744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.389892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.389909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.390056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.641 [2024-07-10 23:42:47.390084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.641 qpair failed and we were unable to recover it. 00:38:38.641 [2024-07-10 23:42:47.390221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.390249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.390427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.390449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 
00:38:38.642 [2024-07-10 23:42:47.390628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.390647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.390833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.390854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.391100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.391120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.391297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.391317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.391424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.391444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.391630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.391650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.391787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.391807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.392069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.392090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.392344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.392364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.392548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.392569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 
00:38:38.642 [2024-07-10 23:42:47.392712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.392732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.392859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.392879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.392992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.393013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.393210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.393232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.393393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.393413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.393578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.393598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.393784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.393804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.394070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.394090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.394304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.394326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.394432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.394451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 
00:38:38.642 [2024-07-10 23:42:47.394636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.394656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.394763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.394783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.395086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.395106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.395301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.395322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.395543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.395563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.395876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.395896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.396013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.396033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.396314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.396335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.396431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.396451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.396734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.396754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 
00:38:38.642 [2024-07-10 23:42:47.396887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.396906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.397039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.397059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.397226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.397246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.397436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.397456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.397581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.397602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.397817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.397837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.398020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.398040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.398299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.398323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.398466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.642 [2024-07-10 23:42:47.398486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.642 qpair failed and we were unable to recover it. 00:38:38.642 [2024-07-10 23:42:47.398724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.398744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 
00:38:38.643 [2024-07-10 23:42:47.398927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.398946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.399125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.399149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.399312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.399333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.399447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.399466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.399665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.399686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.399855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.399875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.400069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.400088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.400333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.400353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.400485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.400504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.400624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.400644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 
00:38:38.643 [2024-07-10 23:42:47.400853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.400872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.401050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.401070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.401261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.401282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.401429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.401449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.401626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.401645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.401771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.401797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.401984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.402004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.402174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.402195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.402370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.402390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.402603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.402622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 
00:38:38.643 [2024-07-10 23:42:47.402765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.402786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.403025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.403045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.403210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.403230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.403434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.403454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.403657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.403676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.403928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.403948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.404065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.404086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.404261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.404282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.404538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.404557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.404770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.404789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 
00:38:38.643 [2024-07-10 23:42:47.405057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.405078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.405284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.405304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.405476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.405496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.405628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.405647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.405775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.405795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.405912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.405930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.406110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.406125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.643 qpair failed and we were unable to recover it. 00:38:38.643 [2024-07-10 23:42:47.406247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.643 [2024-07-10 23:42:47.406264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.406493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.406508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.406682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.406697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 
00:38:38.644 [2024-07-10 23:42:47.406928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.406943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.407181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.407197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.407375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.407391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.407554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.407569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.407683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.407698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.407886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.407901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.408102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.408117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.408229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.408245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.408428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.408443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.408570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.408585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 
00:38:38.644 [2024-07-10 23:42:47.408710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.408725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.408938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.408954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.409075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.409090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.409318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.409334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.409517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.409531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.409705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.409721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.409889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.409904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.410152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.410173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.410297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.410313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.410484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.410499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 
00:38:38.644 [2024-07-10 23:42:47.410603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.410618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.410779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.410794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.411958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.411974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 00:38:38.644 [2024-07-10 23:42:47.412077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.644 [2024-07-10 23:42:47.412094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.644 qpair failed and we were unable to recover it. 
00:38:38.644 [2024-07-10 23:42:47.412307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.644 [2024-07-10 23:42:47.412323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.644 qpair failed and we were unable to recover it.
[... the same three-line error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats without interruption from 23:42:47.412 through 23:42:47.447 ...]
00:38:38.649 [2024-07-10 23:42:47.447754] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization...
00:38:38.649 [2024-07-10 23:42:47.447834] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same connect() failed / qpair error group keeps repeating immediately before and after these two startup lines ...]
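
A note for triage: errno = 111 is ECONNREFUSED on Linux, meaning the TCP connection attempt to 10.0.0.2 was actively refused because nothing was listening on port 4420 (the standard NVMe/TCP port) at that moment. The following is a minimal standalone probe of the same condition, a hand-written sketch for this annotation rather than SPDK code; the address and port are taken from the error lines above.

/* probe_4420.c - reproduce the connect() / errno = 111 check that
 * posix.c:posix_sock_create reports, as a self-contained program. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* While the target listener is down this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected: a listener is up at 10.0.0.2:4420\n");
    }
    close(fd);
    return 0;
}

Built with cc probe_4420.c -o probe_4420, this prints the same errno = 111 while the target is absent and reports success once a listener binds the port.
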
00:38:38.649 [2024-07-10 23:42:47.449284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.649 [2024-07-10 23:42:47.449300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.649 qpair failed and we were unable to recover it.
[... the same three-line triplet repeats, with fresh timestamps, for every reconnect attempt from 23:42:47.449429 through 23:42:47.486917; all attempts fail with errno = 111 against tqpair=0x61500033fe80, addr=10.0.0.2, port=4420 ...]
00:38:38.654 [2024-07-10 23:42:47.487082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.654 [2024-07-10 23:42:47.487097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.654 qpair failed and we were unable to recover it.
00:38:38.654 [2024-07-10 23:42:47.487308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.487324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.487508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.487523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.487697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.487712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.487961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.487976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.488138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.488153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.488348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.488363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.488494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.488508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.488693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.488708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.488933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.488948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.489207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.489224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 
00:38:38.654 [2024-07-10 23:42:47.489403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.489419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.489582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.489598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.489696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.489710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.489992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.490007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.490179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.490195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.490354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.490369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.490645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.490660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.490913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.490929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.491108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.491122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.491288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.491303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 
00:38:38.654 [2024-07-10 23:42:47.491483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.654 [2024-07-10 23:42:47.491498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.654 qpair failed and we were unable to recover it. 00:38:38.654 [2024-07-10 23:42:47.491629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.491644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.491970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.491985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.492221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.492236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.492423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.492439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.492700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.492716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.492900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.492919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.493088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.493106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.493289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.493305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.493493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.493508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 
00:38:38.655 [2024-07-10 23:42:47.493689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.493704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.493938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.493954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.494131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.494145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.494337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.494352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.494537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.494551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.494771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.494785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.494992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.495006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.495183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.495198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.495314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.495329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.495510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.495524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 
00:38:38.655 [2024-07-10 23:42:47.495706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.495721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.495914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.495929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.496120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.496135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.496329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.496344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.496449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.496463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.496623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.496638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.496759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.496774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.496977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.496992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.497173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.497189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.497307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.497322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 
00:38:38.655 [2024-07-10 23:42:47.497413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.497428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.497534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.497549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.497681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.497696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.497912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.497927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.498044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.498060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.498304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.498319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.498445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.498460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.498690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.498705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.498956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.498970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.499142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.499157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 
00:38:38.655 [2024-07-10 23:42:47.499313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.655 [2024-07-10 23:42:47.499329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.655 qpair failed and we were unable to recover it. 00:38:38.655 [2024-07-10 23:42:47.499509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.499524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.499697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.499712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.499898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.499914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.500169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.500185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.500351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.500366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.500578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.500593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.500714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.500731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.500971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.500986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.501095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.501110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 
00:38:38.656 [2024-07-10 23:42:47.501212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.501227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.501408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.501423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.501594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.501608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.501736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.501750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.501922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.501938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.502108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.502123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.502255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.502271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.502454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.502470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.502739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.502754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.503019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.503035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 
00:38:38.656 [2024-07-10 23:42:47.503304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.503321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.503500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.503516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.503776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.503791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.503968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.503982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.504153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.504173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.504347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.504362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.504621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.504636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.504836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.504851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.505118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.505133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.505310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.505328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 
00:38:38.656 [2024-07-10 23:42:47.505535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.505554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.505744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.505759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.505955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.505971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.506077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.506092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.506295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.506310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.506491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.506507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.506689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.506704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.506952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.506967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.507085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.507101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.507293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.507308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 
00:38:38.656 [2024-07-10 23:42:47.507479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.507494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.507675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.507690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.656 [2024-07-10 23:42:47.507863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.656 [2024-07-10 23:42:47.507879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.656 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.508073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.508089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.508276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.508291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.508389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.508404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.508567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.508583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.508754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.508772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.509041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.509055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.509257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.509272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 
00:38:38.657 [2024-07-10 23:42:47.509455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.509470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.509587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.509603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.509719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.509734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.509913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.509927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.510107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.510121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.657 [2024-07-10 23:42:47.510368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.510384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.510564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.510580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.510776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.510791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.511028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.511043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.511205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.511220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 
00:38:38.657 [2024-07-10 23:42:47.511390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.511569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.511584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.511805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.511820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.512070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.512085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.512257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.512273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.512458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.512473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.512699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.512714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.512939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.512954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.513217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.513232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 00:38:38.657 [2024-07-10 23:42:47.513414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.657 [2024-07-10 23:42:47.513429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.657 qpair failed and we were unable to recover it. 
00:38:38.657 [2024-07-10 23:42:47.513553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.657 [2024-07-10 23:42:47.513568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.657 qpair failed and we were unable to recover it.
00:38:38.662 [... the same three-line failure block repeats verbatim, with only the microsecond timestamps advancing, from 2024-07-10 23:42:47.513685 through 23:42:47.558243: every reconnect attempt by tqpair=0x61500033fe80 to 10.0.0.2 port 4420 fails in posix_sock_create with errno = 111, and each time the qpair cannot be recovered ...]
00:38:38.663 [2024-07-10 23:42:47.558525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.558539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.558728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.558743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.558846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.558861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.558973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.558987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.559239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.559253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.559510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.559525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.559773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.559792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.560027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.560044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.560293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.560309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.560540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.560555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 
00:38:38.663 [2024-07-10 23:42:47.560754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.560769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.560947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.560962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.561133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.561148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.561407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.561422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.561601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.561615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.561806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.561820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.561983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.561998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.562173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.562188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.562350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.562365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.562589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.562603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 
00:38:38.663 [2024-07-10 23:42:47.562759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.562774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.563013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.563029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.563284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.563299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.563497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.563512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.563770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.563785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.563993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.564008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.564111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.564127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.564414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.564430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.564626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.564641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.564869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.564884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 
00:38:38.663 [2024-07-10 23:42:47.565067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.565081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.565266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.565281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.565473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.565487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.565719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.663 [2024-07-10 23:42:47.565734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.663 qpair failed and we were unable to recover it. 00:38:38.663 [2024-07-10 23:42:47.565921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.565936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.566125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.566140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.566391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.566406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.566579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.566594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.566756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.566771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.567016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.567032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 
00:38:38.664 [2024-07-10 23:42:47.567257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.567273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.567434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.567450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.567627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.567642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.567837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.567853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.568097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.568111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.568233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.568249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.568420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.568435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.568609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.568626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.568730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.568747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.568976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.568990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 
00:38:38.664 [2024-07-10 23:42:47.569166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.569182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.569435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.569450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.569697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.569711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.569948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.569962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.570136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.570150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.570325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.570340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.570509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.570524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.570754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.570769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.571021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.571035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.571288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.571304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 
00:38:38.664 [2024-07-10 23:42:47.571533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.571548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.571803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.571818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.571943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.571958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.572136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.572151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.572323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.572339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.572619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.572633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.572889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.572903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.573093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.573108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.573403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.573418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.573661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.573677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 
00:38:38.664 [2024-07-10 23:42:47.573878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.573897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.574010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.574024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.574297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.574312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.574541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.574556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.574851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.664 [2024-07-10 23:42:47.574886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.664 qpair failed and we were unable to recover it. 00:38:38.664 [2024-07-10 23:42:47.575108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.575139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.575336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.575357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.575536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.575557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.575667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.575688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.575977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.575997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 
00:38:38.665 [2024-07-10 23:42:47.576207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.576228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.576485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.576505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.576719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.576739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.576829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:38.665 [2024-07-10 23:42:47.576977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.576997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.577170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.577190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.577372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.577392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.577566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.577583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.577847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.577869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.578128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.578148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 
00:38:38.665 [2024-07-10 23:42:47.578408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.578428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.578566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.578586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.578780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.578799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.578928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.578948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.579203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.579224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.579403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.579422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.579681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.579701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.579961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.579981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.580165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.580187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.580378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.580398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 
00:38:38.665 [2024-07-10 23:42:47.580635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.580655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.580774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.580797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.581048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.581068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.581258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.581278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.581542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.581562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.581825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.581845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.582078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.582098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.582270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.582290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.582463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.582485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.582730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.582750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 
00:38:38.665 [2024-07-10 23:42:47.582949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.582969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.583230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.583250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.583453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.583473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.583732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.583751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.583946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.583966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.584166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.584187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.584297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.665 [2024-07-10 23:42:47.584317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.665 qpair failed and we were unable to recover it. 00:38:38.665 [2024-07-10 23:42:47.584555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.584575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.584763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.584783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.584972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.584992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 
00:38:38.666 [2024-07-10 23:42:47.585255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.585276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.585514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.585535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.585710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.585730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.586012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.586033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.586212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.586233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.586516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.586536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.586747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.586767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.586961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.586982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.587275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.587314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 00:38:38.666 [2024-07-10 23:42:47.587537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.666 [2024-07-10 23:42:47.587562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.666 qpair failed and we were unable to recover it. 
00:38:38.666 [2024-07-10 23:42:47.587847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.587865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.588097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.588113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.588286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.588302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.588499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.588514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.588636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.588651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.588822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.588838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.589067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.589082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.589336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.589352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.589605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.589621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.589869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.589884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.590051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.590066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.590319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.590337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.590520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.590536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.590709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.590724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.590921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.590935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.591194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.591209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.591339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.591354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.591533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.591548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.591748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.591762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.591945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.591960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.592195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.592210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.592451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.592466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.592713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.592728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.592834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.592849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.593075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.593089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.666 qpair failed and we were unable to recover it.
00:38:38.666 [2024-07-10 23:42:47.593265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.666 [2024-07-10 23:42:47.593281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.593533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.593548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.593802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.593817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.594048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.594063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.594289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.594304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.594499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.594513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.594738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.594753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.594859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.594873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.595066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.595081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.595204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.595220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.595446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.595461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.595688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.595703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.595878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.595893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.596107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.596137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.596283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.596307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.596573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.596593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.596844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.596863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.597075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.597096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.597358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.597380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.597639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.597658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.597833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.597853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.598057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.598078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.598317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.598337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.598626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.598646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.598908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.598927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.599121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.599142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.599410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.599435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.599620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.599641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.599751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.599771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.600060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.600080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.600349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.600370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.600574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.600594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.600777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.667 [2024-07-10 23:42:47.600798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.667 qpair failed and we were unable to recover it.
00:38:38.667 [2024-07-10 23:42:47.600988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.601009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.601281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.601301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.601477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.601497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.601764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.601784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.602004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.602024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.602282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.602302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.602428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.602448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.602757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.602774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.602976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.602991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.603216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.603231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.603358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.603373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.603540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.603555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.603810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.603825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.604078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.604094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.604325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.604341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.604461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.604475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.604599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.604617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.604858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.604872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.605124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.605139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.605261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.605276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.605452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.605469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.605708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.605723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.605971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.605987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.606156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.606174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.606427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.606442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.606618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.606632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.606808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.606824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.606952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.606968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.607217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.607237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.607432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.607447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.607617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.607631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.607875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.607889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.608065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.608079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.608332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.608347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.608522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.608537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.608713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.608727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.608929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.608944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.609129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.609144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.609443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.609458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.609708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.609723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.609999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.668 [2024-07-10 23:42:47.610014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.668 qpair failed and we were unable to recover it.
00:38:38.668 [2024-07-10 23:42:47.610241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.610256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.610421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.610436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.610660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.610675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.610834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.610849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.611088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.611103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.611371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.611387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.611497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.611512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.611766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.611781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.611952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.611968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.612170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.612185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.612362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.612377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.612622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.612636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.612820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.612835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.613089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.613104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.613206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.613221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.613420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.613435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.613604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.613619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.613738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.613753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.613914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.613929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.614156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.614177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.614428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.614444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.614668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.614683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.614937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.614952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.615109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.615124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.615380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.615395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.615571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.615586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.615796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.615810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.616005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.616020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.616129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.616144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.616385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.616401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.616651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.616665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.616943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.616957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.617214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.617228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.617414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.617429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.617678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.617692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.617884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.617899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.618171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.618187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.618416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.618430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.618685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.618700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.618880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.618895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.669 qpair failed and we were unable to recover it.
00:38:38.669 [2024-07-10 23:42:47.619053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.669 [2024-07-10 23:42:47.619068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.619257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.619272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.619526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.619541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.619703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.619718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.619944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.619959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.620186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.620201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.620434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.620449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.620563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.620578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.620779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.620794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.620971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.620986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.621290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.621305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.621538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.621563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.621746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.621761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.621990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.622005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.622255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.622270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.622527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.622542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.622753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.622768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.623021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.623036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.623264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.623280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.623529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.623547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.623723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.623738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.623896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.623912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.624107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.624122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.624372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.624388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.624550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.624565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.624739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.624754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.625009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.625024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.625203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.625218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.625413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.625428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.625678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.625692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.625941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.625957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.626198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.626214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.626474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.626489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.626666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.626681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.626931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.626946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.627199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.627215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.627386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.627401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.627637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.627653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.627771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.627786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.628037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.670 [2024-07-10 23:42:47.628051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.670 qpair failed and we were unable to recover it.
00:38:38.670 [2024-07-10 23:42:47.628260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.628276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.628441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.628456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.628643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.628658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.628850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.628865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.629027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.629041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.629254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.629269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.629456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.629471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.629736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.629751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.629924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.629939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.630127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.630142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.630352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.630368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.630489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.630504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.630774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.630789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.630952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.630966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.631215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.631231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.631342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.631357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.631472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.631487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.631732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.631746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.631998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.632013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.632257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.632275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.632451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.632466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.632579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.632593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.632771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.632786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.633011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.633029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.633214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.633229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.633486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.633500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.633660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.633674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.633898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.633913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.634139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.634154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.634425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.634441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.634662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.671 [2024-07-10 23:42:47.634677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.671 qpair failed and we were unable to recover it.
00:38:38.671 [2024-07-10 23:42:47.634854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.671 [2024-07-10 23:42:47.634869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.671 qpair failed and we were unable to recover it. 00:38:38.671 [2024-07-10 23:42:47.635141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.671 [2024-07-10 23:42:47.635156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.671 qpair failed and we were unable to recover it. 00:38:38.671 [2024-07-10 23:42:47.635283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.671 [2024-07-10 23:42:47.635298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.671 qpair failed and we were unable to recover it. 00:38:38.671 [2024-07-10 23:42:47.635506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.671 [2024-07-10 23:42:47.635521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.671 qpair failed and we were unable to recover it. 00:38:38.671 [2024-07-10 23:42:47.635794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.635818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.636072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.636087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.636340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.636355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.636580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.636595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.636855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.636870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.637033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.637048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 
00:38:38.672 [2024-07-10 23:42:47.637223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.637238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.637439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.637455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.637639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.637654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.637878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.637893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.637989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.638004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.638193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.638208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.638433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.638448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.638635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.638650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.638898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.638913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.639166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.639180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 
00:38:38.672 [2024-07-10 23:42:47.639415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.639430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.639677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.639692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.639917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.639933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.640112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.640126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.640333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.640350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.640578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.640593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.640853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.640868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.641026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.641041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.641292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.641310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.641560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.641575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 
00:38:38.672 [2024-07-10 23:42:47.641753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.641769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.642051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.642066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.642259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.642274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.642438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.642454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.642705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.642720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.642969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.642984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.643220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.643235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.643510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.643526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.643745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.643761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.643924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.643939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 
00:38:38.672 [2024-07-10 23:42:47.644191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.644206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.644457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.644472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.644646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.644661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.644911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.644926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.645090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.672 [2024-07-10 23:42:47.645105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.672 qpair failed and we were unable to recover it. 00:38:38.672 [2024-07-10 23:42:47.645295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.645312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.645581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.645595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.645774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.645787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.645984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.645998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.646228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.646241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 
00:38:38.673 [2024-07-10 23:42:47.646447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.646461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.646638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.646651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.646902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.646915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.647053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.647067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.647326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.647340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.647459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.647472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.647653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.647667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.647899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.647912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.648015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.648029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.648324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.648338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 
00:38:38.673 [2024-07-10 23:42:47.648514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.648527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.648756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.648770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.648947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.648961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.649085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.649098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.649377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.649391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.649576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.649589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.649835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.649848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.649968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.649982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.650102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.650123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.650385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.650399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 
00:38:38.673 [2024-07-10 23:42:47.650665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.650679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.650907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.650920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.651093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.651106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.651354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.651367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.651561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.651575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.651781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.651794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.651966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.651980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.652176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.652190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.652453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.652466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.652701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.652714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 
00:38:38.673 [2024-07-10 23:42:47.652827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.652840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.653051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.653064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.653305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.653319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.653561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.653574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.653705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.653718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.673 qpair failed and we were unable to recover it. 00:38:38.673 [2024-07-10 23:42:47.653898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.673 [2024-07-10 23:42:47.653911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.654088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.654101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.654261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.654274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.654450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.654466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.654741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.654754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 
00:38:38.674 [2024-07-10 23:42:47.654998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.655011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.655285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.655298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.655468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.655482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.655686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.655700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.655878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.655892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.656074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.656087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.656359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.656372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.656492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.656505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.656618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.656631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.656746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.656759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 
00:38:38.674 [2024-07-10 23:42:47.656930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.656943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.657192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.657206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.657332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.657345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.657589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.657602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.657815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.657828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.658108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.658121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.658243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.658256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.658508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.658521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.658714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.658729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.658937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.658950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 
00:38:38.674 [2024-07-10 23:42:47.659233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.659247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.659500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.659513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.659684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.659696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.659881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.659894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.660143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.660155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.660349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.660361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.660561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.660573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.660823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.660836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.661039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.661052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.661283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.661297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 
00:38:38.674 [2024-07-10 23:42:47.661506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.661519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.661771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.661783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.662062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.662075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.662277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.662290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.662565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.662577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.662739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.662751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.674 qpair failed and we were unable to recover it. 00:38:38.674 [2024-07-10 23:42:47.662985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.674 [2024-07-10 23:42:47.662998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.675 qpair failed and we were unable to recover it. 00:38:38.675 [2024-07-10 23:42:47.663227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.675 [2024-07-10 23:42:47.663240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.675 qpair failed and we were unable to recover it. 00:38:38.675 [2024-07-10 23:42:47.663414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.675 [2024-07-10 23:42:47.663427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.675 qpair failed and we were unable to recover it. 00:38:38.675 [2024-07-10 23:42:47.663623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.675 [2024-07-10 23:42:47.663635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.675 qpair failed and we were unable to recover it. 
00:38:38.675 [2024-07-10 23:42:47.663889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.675 [2024-07-10 23:42:47.663902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.675 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats roughly 200 more times between 23:42:47.663 and 23:42:47.710 (log clock 00:38:38.675 through 00:38:38.957), almost always for tqpair=0x61500033fe80; isolated entries report the identical failure for tqpair=0x61500032ff80, 0x615000350000, and 0x61500032d780, all against addr=10.0.0.2, port=4420 ...]
00:38:38.957 [2024-07-10 23:42:47.710184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.710198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.710402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.710415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.710663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.710676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.710960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.710973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.711144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.711157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.711359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.711372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.711613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.711626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.711753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.711766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.711960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.711975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.712226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.712240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 
00:38:38.957 [2024-07-10 23:42:47.712518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.957 [2024-07-10 23:42:47.712531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.957 qpair failed and we were unable to recover it. 00:38:38.957 [2024-07-10 23:42:47.712785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.712798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.713030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.713043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.713317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.713343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.713567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.713580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.713751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.713764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.714007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.714020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.714216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.714230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.714386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.714400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.714556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.714570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 
00:38:38.958 [2024-07-10 23:42:47.714850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.714863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.715088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.715102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.715272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.715286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.715537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.715550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.715654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.715667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.715944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.715958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.716185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.716199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.716382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.716395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.716652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.716665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.716842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.716855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 
00:38:38.958 [2024-07-10 23:42:47.717100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.717113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.717302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.717316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.717562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.717576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.717824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.717837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.717952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.717965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.718219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.718235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.718444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.718457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.718711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.718725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.718999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.719013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.719116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.719128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 
00:38:38.958 [2024-07-10 23:42:47.719393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.719406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.719635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.719649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.719771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.719784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.720027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.720041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.720284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.720297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.958 [2024-07-10 23:42:47.720470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.958 [2024-07-10 23:42:47.720484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.958 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.720682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.720695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.720926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.720939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.721168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.721181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.721296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.721309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 
00:38:38.959 [2024-07-10 23:42:47.721418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.721431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.721705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.721719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.721966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.721978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.722242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.722255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.722501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.722519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.722752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.722765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.722964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.722977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.723152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.723168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.723349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.723363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.723614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.723627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 
00:38:38.959 [2024-07-10 23:42:47.723863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.723876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.724131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.724145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.724453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.724476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.724782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.724807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.724993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.725012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.725200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.725218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.725459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.725478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.725682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.725700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.725950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.725968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.726252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.726270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 
00:38:38.959 [2024-07-10 23:42:47.726486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.726505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.726692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.726710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.726994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.727012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.727224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.727243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.727426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.727444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.727722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.727744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.727923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.727941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.728198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.728217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.728429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.728448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.728704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.728722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 
00:38:38.959 [2024-07-10 23:42:47.728958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.728977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.729224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.729243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.729432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.729450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.729694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.729712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.959 [2024-07-10 23:42:47.729909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.959 [2024-07-10 23:42:47.729927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.959 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.730233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.730252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.730492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.730511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.730749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.730767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.730979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.730997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.731196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.731212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 
00:38:38.960 [2024-07-10 23:42:47.731377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.731390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.731590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.731603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.731850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.731863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.732097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.732110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.732286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.732300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.732525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.732538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.732789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.732802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.733063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.733076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.733331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.733345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.733470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.733483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 
00:38:38.960 [2024-07-10 23:42:47.733717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.733730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.733904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.733918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.734142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.734156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.734423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.734436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.734628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.734641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.734830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.734843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.735069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.735082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.735315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.735329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.735591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.735604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.735851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.735864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 
00:38:38.960 [2024-07-10 23:42:47.736042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.736055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.736302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.736316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.736522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.736535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.736749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.736761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.736883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.736897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.737139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.737154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.737387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.737401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.737526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.737539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.737706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.737719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.737918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.737931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 
00:38:38.960 [2024-07-10 23:42:47.738119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.738132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.738356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.738369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.738544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.738557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.738753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.738766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.738991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.739004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.739253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.960 [2024-07-10 23:42:47.739266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.960 qpair failed and we were unable to recover it. 00:38:38.960 [2024-07-10 23:42:47.739438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.961 [2024-07-10 23:42:47.739451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.961 qpair failed and we were unable to recover it. 00:38:38.961 [2024-07-10 23:42:47.739679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.961 [2024-07-10 23:42:47.739692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.961 qpair failed and we were unable to recover it. 00:38:38.961 [2024-07-10 23:42:47.739984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.961 [2024-07-10 23:42:47.739997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.961 qpair failed and we were unable to recover it. 00:38:38.961 [2024-07-10 23:42:47.740177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.961 [2024-07-10 23:42:47.740190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.961 qpair failed and we were unable to recover it. 
00:38:38.961 [2024-07-10 23:42:47.740442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.961 [2024-07-10 23:42:47.740455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.961 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 23:42:47.740708 and 23:42:47.787317 ...]
00:38:38.966 [2024-07-10 23:42:47.787579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.966 [2024-07-10 23:42:47.787592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.966 qpair failed and we were unable to recover it.
00:38:38.966 [2024-07-10 23:42:47.787763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.787776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.787960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.787973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.788183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.788207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.788348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.788367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.788613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.788632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.788895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.788914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.789085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.789104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.789304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.789324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.789586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.789605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.789795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.789813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 
00:38:38.966 [2024-07-10 23:42:47.790061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.790079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.790263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.790282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.790416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.790434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.790623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.790641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.790829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.790849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.791082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.791104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.791380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.791398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.791656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.791676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.791906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.791931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.792188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.792206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 
00:38:38.966 [2024-07-10 23:42:47.792457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.792475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.792744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.792762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.792985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.793003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.793260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.966 [2024-07-10 23:42:47.793279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.966 qpair failed and we were unable to recover it. 00:38:38.966 [2024-07-10 23:42:47.793449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.793467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.793739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.793758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.794021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.794040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.794275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.794294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.794537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.794556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.794692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.794710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 
00:38:38.967 [2024-07-10 23:42:47.794833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.794851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.795135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.795153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.795417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.795436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.795627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.795646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.795782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.795800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.796057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.796074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.796308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.796326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.796445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.796464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.796707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.796726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.796989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.797008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 
00:38:38.967 [2024-07-10 23:42:47.797242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.797260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.797460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.797478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.797703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.797732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.797918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.797940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.798246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.798262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.798436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.798450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.798698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.798712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.798872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.798885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.799113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.799126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.799302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.799315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 
00:38:38.967 [2024-07-10 23:42:47.799509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.799523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.799675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.799688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.799879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.799892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.800064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.800078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.800344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.800358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.800479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.800494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.800715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.800728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.800894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.800907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.801151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.801167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.967 qpair failed and we were unable to recover it. 00:38:38.967 [2024-07-10 23:42:47.801369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.967 [2024-07-10 23:42:47.801382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 
00:38:38.968 [2024-07-10 23:42:47.801561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.801574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.801806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.801819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.802013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.802026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.802277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.802290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.802415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.802428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.802684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.802698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.802879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.802892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.803142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.803156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.803386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.803399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.803657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.803672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 
00:38:38.968 [2024-07-10 23:42:47.803901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.803914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.804089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.804102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.804329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.804342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.804544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:38.968 [2024-07-10 23:42:47.804573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:38.968 [2024-07-10 23:42:47.804586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:38.968 [2024-07-10 23:42:47.804592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.804596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:38.968 [2024-07-10 23:42:47.804606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 [2024-07-10 23:42:47.804606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.804841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.804853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.805021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.805035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.805041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:38:38.968 [2024-07-10 23:42:47.805133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:38:38.968 [2024-07-10 23:42:47.805205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.805208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:38:38.968 [2024-07-10 23:42:47.805219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it.
00:38:38.968 [2024-07-10 23:42:47.805230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:38:38.968 [2024-07-10 23:42:47.805403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.805417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.805618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.805631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.805905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.805918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.806150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.806168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.806399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.806412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.806686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.806699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.806881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.806895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.807162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.807176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.807445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.807459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 
00:38:38.968 [2024-07-10 23:42:47.807718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.807731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.807908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.807922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.808131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.808145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.808313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.808327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.808497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.808511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.808748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.808761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.808880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.808896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.809099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.809113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.809392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.809406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.809579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.809592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 
00:38:38.968 [2024-07-10 23:42:47.809722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.968 [2024-07-10 23:42:47.809735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.968 qpair failed and we were unable to recover it. 00:38:38.968 [2024-07-10 23:42:47.809987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.810001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.810180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.810194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.810472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.810486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.810684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.810698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.810939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.810952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.811154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.811175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.811355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.811374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.811558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.811571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.811811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.811825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 
00:38:38.969 [2024-07-10 23:42:47.811999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.812012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.812263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.812277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.812530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.812544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.812785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.812800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.813047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.813061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.813237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.813251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.813499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.813513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.813765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.813778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.814022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.814036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.814226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.814239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 
00:38:38.969 [2024-07-10 23:42:47.814464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.814478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.814735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.814749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.814907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.814920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.815041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.815054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.815304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.815318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.815500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.815514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.815712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.815726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.815968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.815981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.816230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.816244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 00:38:38.969 [2024-07-10 23:42:47.816472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.969 [2024-07-10 23:42:47.816486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.969 qpair failed and we were unable to recover it. 
00:38:38.969 [2024-07-10 23:42:47.816660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.969 [2024-07-10 23:42:47.816674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.969 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 23:42:47.816932 through 23:42:47.861494; roughly 200 duplicate triplets elided ...]
00:38:38.974 [2024-07-10 23:42:47.861481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.974 [2024-07-10 23:42:47.861494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-07-10 23:42:47.861722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.974 [2024-07-10 23:42:47.861735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.974 qpair failed and we were unable to recover it. 00:38:38.974 [2024-07-10 23:42:47.861971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.974 [2024-07-10 23:42:47.861983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.974 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.862250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.862263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.862358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.862371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.862571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.862584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.862710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.862723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.862983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.862996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.863259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.863273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.863520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.863533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.863788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.863801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 
00:38:38.975 [2024-07-10 23:42:47.864042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.864055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.864213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.864227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.864428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.864441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.864668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.864681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.864909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.864922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.865149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.865172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.865331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.865344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.865519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.865532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.865727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.865740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.865987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.865999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 
00:38:38.975 [2024-07-10 23:42:47.866187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.866201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.866376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.866389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.866588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.866601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.866732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.866750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.866923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.866936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.867123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.867135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.867328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.867342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.867520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.867534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.867777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.867789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.867900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.867912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 
00:38:38.975 [2024-07-10 23:42:47.868188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.868202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.868345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.868359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.868611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.868624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.868848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.868861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.869032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.869044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.869317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.869330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.869502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.869518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.869711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.869724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.869911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.869924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.870043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.870056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 
00:38:38.975 [2024-07-10 23:42:47.870167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.975 [2024-07-10 23:42:47.870180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.975 qpair failed and we were unable to recover it. 00:38:38.975 [2024-07-10 23:42:47.870363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.870375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.870548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.870561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.870785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.870798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.871049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.871062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.871317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.871330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.871502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.871515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.871641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.871653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.871915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.871928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.872096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.872109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 
00:38:38.976 [2024-07-10 23:42:47.872366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.872380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.872522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.872535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.872774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.872787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.872958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.872971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.873220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.873233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.873427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.873441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.873636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.873648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.873877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.873890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.874056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.874069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.874232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.874246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 
00:38:38.976 [2024-07-10 23:42:47.874374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.874387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.874567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.874580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.874761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.874773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.874954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.874967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.875155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.875180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.875412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.875425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.875530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.875542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.875652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.875665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.875864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.875877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.876129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.876141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 
00:38:38.976 [2024-07-10 23:42:47.876430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.876444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.876552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.876564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.876761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.876777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.877027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.877039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.877201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.877214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.877464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.877477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.877591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.877606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.976 qpair failed and we were unable to recover it. 00:38:38.976 [2024-07-10 23:42:47.877810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.976 [2024-07-10 23:42:47.877822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.878024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.878036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.878224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.878237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 
00:38:38.977 [2024-07-10 23:42:47.878491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.878507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.878738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.878751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.879004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.879017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.879214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.879228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.879496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.879509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.879692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.879705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.880002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.880016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.880248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.880272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.880509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.880524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.880713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.880727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 
00:38:38.977 [2024-07-10 23:42:47.880925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.880940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.881189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.881204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.881389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.881403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.881694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.881709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.881961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.881974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.882213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.882228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.882364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.882378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.882521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.882535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.882649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.882662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.882777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.882790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 
00:38:38.977 [2024-07-10 23:42:47.882951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.882964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.883186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.883201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.883326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.883340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.883459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.883473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.883645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.883660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.883833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.883846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.884124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.884138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.884393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.884409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.884655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.884669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.884851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.884865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 
00:38:38.977 [2024-07-10 23:42:47.885127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.885142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.885446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.885460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.885758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.885773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.885950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.885963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.886172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.886186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.886359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.886373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.886592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.886609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.886786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.886800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.977 [2024-07-10 23:42:47.887033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.977 [2024-07-10 23:42:47.887047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.977 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.887302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.887317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 
00:38:38.978 [2024-07-10 23:42:47.887543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.887556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.887665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.887679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.887954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.887968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.888198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.888212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.888446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.888460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.888734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.888748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.888921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.888934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.889212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.889227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.889411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.889424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 00:38:38.978 [2024-07-10 23:42:47.889677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.978 [2024-07-10 23:42:47.889691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.978 qpair failed and we were unable to recover it. 
00:38:38.978 [2024-07-10 23:42:47.889975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.978 [2024-07-10 23:42:47.889989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.978 qpair failed and we were unable to recover it.
00:38:38.978 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 23:42:47.890 through 23:42:47.937; duplicate entries elided ...]
00:38:38.983 [2024-07-10 23:42:47.937969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.983 [2024-07-10 23:42:47.937982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:38.983 qpair failed and we were unable to recover it.
00:38:38.983 [2024-07-10 23:42:47.938229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.938243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.938445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.938458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.938650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.938665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.938858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.938877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.939044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.939056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.939306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.939319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.939432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.939445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.939712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.939725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.939988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.940001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.940246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.940260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 
00:38:38.983 [2024-07-10 23:42:47.940502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.940515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.940637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.940649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.940901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.940913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.941086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.941099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.941357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.941370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.941532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.941545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.941808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.941821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.942096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.942109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.942202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.983 [2024-07-10 23:42:47.942215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.983 qpair failed and we were unable to recover it. 00:38:38.983 [2024-07-10 23:42:47.942480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.942493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 
00:38:38.984 [2024-07-10 23:42:47.942652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.942664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.942896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.942909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.943209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.943222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.943428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.943441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.943676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.943689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.943915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.943927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.944107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.944120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.944322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.944335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.944568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.944581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.944761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.944774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 
00:38:38.984 [2024-07-10 23:42:47.945001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.945014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.945305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.945318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.945536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.945549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.945678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.945690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.945805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.945818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.946075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.946087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.946252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.946265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.946516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.946530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.946702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.946715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.946971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.946983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 
00:38:38.984 [2024-07-10 23:42:47.947230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.947243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.947408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.947420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.947589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.947602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.947854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.947869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.948119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.948132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.948291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.948304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.948473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.948486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.948762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.948775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.949006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.949019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.949261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.949274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 
00:38:38.984 [2024-07-10 23:42:47.949472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.949485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.949715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.949728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.949925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.949938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.950178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.950191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.950374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.984 [2024-07-10 23:42:47.950387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.984 qpair failed and we were unable to recover it. 00:38:38.984 [2024-07-10 23:42:47.950582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.950594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.950820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.950832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.951089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.951102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.951283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.951296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.951551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.951564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 
00:38:38.985 [2024-07-10 23:42:47.951740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.951752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.952003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.952016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.952266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.952279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.952520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.952532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.952782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.952794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.953018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.953030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.953290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.953305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.953474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.953492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.953669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.953682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.953906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.953919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 
00:38:38.985 [2024-07-10 23:42:47.954176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.954189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.954473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.954486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.954688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.954700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.954942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.954954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.955203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.955216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.955382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.955394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.955552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.955565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.955848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.955860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.956094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.956107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.956355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.956367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 
00:38:38.985 [2024-07-10 23:42:47.956615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.956628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.956734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.956746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.957002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.957014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.957263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.957278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.957513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.957526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.957779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.957792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.957970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.957983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.958167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.958181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.958352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.958365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.958558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.958570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 
00:38:38.985 [2024-07-10 23:42:47.958694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.958707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.958931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.958944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.959168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.959181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.959479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.959492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.959736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.959748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.959975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.985 [2024-07-10 23:42:47.959988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.985 qpair failed and we were unable to recover it. 00:38:38.985 [2024-07-10 23:42:47.960244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.960257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.960510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.960647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.960660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.960894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.960907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 
00:38:38.986 [2024-07-10 23:42:47.961158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.961176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.961420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.961433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.961695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.961707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.961946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.961958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.962139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.962152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.962410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.962423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.962649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.962662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.962838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.962850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.963078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.963091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.963330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.963344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 
00:38:38.986 [2024-07-10 23:42:47.963627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.963667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.963968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.964004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 A controller has encountered a failure and is being reset. 00:38:38.986 [2024-07-10 23:42:47.964276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.964310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.964570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.964584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.964839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.964852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.964982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.964995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.965189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.965202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.965397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.965410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.965603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.965616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.965860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.965873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 
00:38:38.986 [2024-07-10 23:42:47.966107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.966119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.966397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.966411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.966655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.966668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.966938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.966960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:38.986 qpair failed and we were unable to recover it. 00:38:38.986 [2024-07-10 23:42:47.967299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.986 [2024-07-10 23:42:47.967329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:38.986 [2024-07-10 23:42:47.967346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:38.986 [2024-07-10 23:42:47.967371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:38.986 [2024-07-10 23:42:47.967390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.986 [2024-07-10 23:42:47.967406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.986 [2024-07-10 23:42:47.967423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.986 Unable to reset the controller. 
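For anyone triaging this block: errno 111 is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 while the disconnect test held the target's listener down, so every host-side reconnect attempt fails exactly as traced above. A minimal shell probe for the same condition follows; it is a hedged sketch, not part of the test, assuming only stock bash (its /dev/tcp redirection) and the address/port copied from the log:

# Probe whether anything is listening where the qpairs try to connect.
# addr/port are taken from the log above; everything else is plain bash.
addr=10.0.0.2 port=4420
if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
    echo "listener up on ${addr}:${port}"    # connect() succeeded
else
    echo "no listener on ${addr}:${port}"    # connect() typically fails with ECONNREFUSED (errno 111), as logged
fi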
00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.246 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.504 Malloc0 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.504 [2024-07-10 23:42:48.362973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.504 [2024-07-10 23:42:48.395246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.504 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.505 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:39.505 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.505 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.505 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.505 23:42:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2673619 00:38:40.071 Controller properly reset. 00:38:45.336 Initializing NVMe Controllers 00:38:45.336 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:45.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:45.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:45.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:45.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:45.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:45.336 Initialization complete. Launching workers. 
00:38:45.336 Starting thread on core 1 00:38:45.336 Starting thread on core 2 00:38:45.336 Starting thread on core 3 00:38:45.336 Starting thread on core 0 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:45.336 00:38:45.336 real 0m11.383s 00:38:45.336 user 0m35.639s 00:38:45.336 sys 0m5.825s 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:45.336 ************************************ 00:38:45.336 END TEST nvmf_target_disconnect_tc2 00:38:45.336 ************************************ 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:45.336 rmmod nvme_tcp 00:38:45.336 rmmod nvme_fabrics 00:38:45.336 rmmod nvme_keyring 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2674234 ']' 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2674234 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2674234 ']' 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2674234 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674234 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674234' 00:38:45.336 killing process with pid 2674234 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2674234 00:38:45.336 23:42:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2674234 00:38:46.718 
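The rpc_cmd traces in the tc2 setup above correspond one-to-one to plain rpc.py calls. Reconstructed below as a sketch of the target bring-up this test performs; every name (Malloc0, cnode1, the serial, 10.0.0.2:4420) is copied from the trace, while the rpc.py path, the default RPC socket, and an already-running nvmf_tgt are assumptions:

# Sketch of the bring-up traced in host/target_disconnect.sh above.
rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0        # backing namespace: 64 MB malloc bdev, 512 B blocks
$rpc nvmf_create_transport -t tcp -o             # TCP transport, flags as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420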
23:42:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:46.718 23:42:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.621 23:42:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:48.621 00:38:48.621 real 0m20.689s 00:38:48.621 user 1m5.547s 00:38:48.621 sys 0m10.468s 00:38:48.621 23:42:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:48.621 23:42:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:48.621 ************************************ 00:38:48.621 END TEST nvmf_target_disconnect 00:38:48.621 ************************************ 00:38:48.621 23:42:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:48.621 23:42:57 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:38:48.621 23:42:57 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:48.621 23:42:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.621 23:42:57 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:38:48.621 00:38:48.621 real 30m5.705s 00:38:48.621 user 77m43.126s 00:38:48.621 sys 7m1.206s 00:38:48.621 23:42:57 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:48.621 23:42:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.621 ************************************ 00:38:48.621 END TEST nvmf_tcp 00:38:48.621 ************************************ 00:38:48.621 23:42:57 -- common/autotest_common.sh@1142 -- # return 0 00:38:48.621 23:42:57 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:38:48.621 23:42:57 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:48.621 23:42:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:48.621 23:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:48.621 23:42:57 -- common/autotest_common.sh@10 -- # set +x 00:38:48.621 ************************************ 00:38:48.621 START TEST spdkcli_nvmf_tcp 00:38:48.621 ************************************ 00:38:48.621 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:48.880 * Looking for test storage... 
00:38:48.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.880 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2676063 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2676063 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2676063 ']' 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:48.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.881 23:42:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:48.881 [2024-07-10 23:42:57.796899] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:38:48.881 [2024-07-10 23:42:57.796990] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676063 ] 00:38:48.881 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.881 [2024-07-10 23:42:57.900155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:49.140 [2024-07-10 23:42:58.112723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.140 [2024-07-10 23:42:58.112734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:49.708 23:42:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:49.708 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:49.708 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:49.708 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:49.708 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:49.708 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:49.708 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:49.708 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:49.708 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:49.708 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:49.708 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:49.708 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:49.708 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:49.708 ' 00:38:52.242 [2024-07-10 23:43:01.177467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:53.622 [2024-07-10 23:43:02.353598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:55.526 [2024-07-10 23:43:04.516405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:57.449 [2024-07-10 23:43:06.378420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:58.822 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:58.822 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:58.822 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:58.822 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:58.822 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:58.822 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:58.822 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:58.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:58.822 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:58.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:58.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:58.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:58.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:58.823 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:59.080 23:43:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:59.338 23:43:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:59.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:59.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:59.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:59.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:59.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:59.338 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:59.338 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:59.338 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:59.338 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:59.338 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:59.338 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:59.338 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:59.338 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:59.338 ' 00:39:05.912 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:05.912 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:05.912 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:05.912 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:05.912 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:05.912 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:05.912 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:05.912 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:05.912 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:05.912 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:39:05.912 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:05.912 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:05.912 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:05.912 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2676063 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2676063 ']' 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2676063 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2676063 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2676063' 00:39:05.912 killing process with pid 2676063 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2676063 00:39:05.912 23:43:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2676063 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2676063 ']' 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2676063 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2676063 ']' 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2676063 00:39:06.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2676063) - No such process 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2676063 is not found' 00:39:06.481 Process with pid 2676063 is not found 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:06.481 00:39:06.481 real 0m17.646s 00:39:06.481 user 0m35.676s 00:39:06.481 sys 0m0.857s 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:06.481 23:43:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:06.481 ************************************ 00:39:06.481 END TEST spdkcli_nvmf_tcp 00:39:06.481 ************************************ 00:39:06.481 23:43:15 -- common/autotest_common.sh@1142 -- # return 0 00:39:06.481 23:43:15 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:06.481 23:43:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:06.481 23:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:06.481 23:43:15 -- common/autotest_common.sh@10 -- # set +x 00:39:06.481 ************************************ 00:39:06.481 START TEST nvmf_identify_passthru 00:39:06.481 ************************************ 00:39:06.481 23:43:15 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:06.481 * Looking for test storage... 00:39:06.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:06.481 23:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.481 23:43:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.481 23:43:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.481 23:43:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:06.481 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:06.481 23:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.481 23:43:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.481 23:43:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.481 23:43:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.481 23:43:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.482 23:43:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.482 23:43:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:06.482 23:43:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.482 23:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.482 23:43:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:06.482 23:43:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:06.482 23:43:15 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:39:06.482 23:43:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.752 23:43:20 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:11.752 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:11.753 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:11.753 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:11.753 Found net devices under 0000:86:00.0: cvl_0_0 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:11.753 Found net devices under 0000:86:00.1: cvl_0_1 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
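The probe above keys off the E810 device ID 0x159b and then looks for kernel netdevs under each matching function's sysfs node. A rough standalone equivalent of that walk (the harness itself uses a cached PCI bus map, so the lspci scan here is an assumption):
# hypothetical re-creation of the NIC discovery; 8086:159b is the E810 ID matched above
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "Found net devices under $pci: $(basename "$netdev")"
    done
done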
00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:11.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:39:11.753 00:39:11.753 --- 10.0.0.2 ping statistics --- 00:39:11.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.753 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:11.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:39:11.753 00:39:11.753 --- 10.0.0.1 ping statistics --- 00:39:11.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.753 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:11.753 23:43:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:11.753 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:11.753 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:39:11.753 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:39:11.754 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:11.754 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:11.754 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:39:11.754 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:39:11.754 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:39:11.754 23:43:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:39:11.754 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:39:11.754 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:39:11.754 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:39:11.754 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:11.754 23:43:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:11.754 EAL: No free 2048 kB hugepages reported on node 1 00:39:15.944 
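Condensing the nvmf_tcp_init sequence traced above: the target port (cvl_0_0, 10.0.0.2) ends up inside a network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and the two pings verify both directions. Names and addresses below are taken straight from the log:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # namespaced target -> root ns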
23:43:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:39:15.944 23:43:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:15.944 23:43:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:15.944 23:43:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:39:15.944 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2683096 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:20.133 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2683096 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2683096 ']' 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:20.133 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:20.133 [2024-07-10 23:43:29.121910] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:39:20.133 [2024-07-10 23:43:29.122000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.133 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.392 [2024-07-10 23:43:29.232747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:20.392 [2024-07-10 23:43:29.455295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.392 [2024-07-10 23:43:29.455337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
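The nvmf_tgt instance above is launched inside the namespace with --wait-for-rpc, so the RPC sequence traced below has to enable the passthru identify handler before framework init. By hand, the bring-up amounts to roughly (all commands as they appear in the trace):
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr     # must land before framework init
scripts/rpc.py framework_start_init                          # releases the --wait-for-rpc hold
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# pass criterion: identify over the fabric must report the PCIe device's own serial/model
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'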
00:39:20.392 [2024-07-10 23:43:29.455349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.392 [2024-07-10 23:43:29.455358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.392 [2024-07-10 23:43:29.455367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.392 [2024-07-10 23:43:29.455436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.392 [2024-07-10 23:43:29.455465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.392 [2024-07-10 23:43:29.455471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:20.392 [2024-07-10 23:43:29.455451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:39:20.960 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:20.960 INFO: Log level set to 20 00:39:20.960 INFO: Requests: 00:39:20.960 { 00:39:20.960 "jsonrpc": "2.0", 00:39:20.960 "method": "nvmf_set_config", 00:39:20.960 "id": 1, 00:39:20.960 "params": { 00:39:20.960 "admin_cmd_passthru": { 00:39:20.960 "identify_ctrlr": true 00:39:20.960 } 00:39:20.960 } 00:39:20.960 } 00:39:20.960 00:39:20.960 INFO: response: 00:39:20.960 { 00:39:20.960 "jsonrpc": "2.0", 00:39:20.960 "id": 1, 00:39:20.960 "result": true 00:39:20.960 } 00:39:20.960 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.960 23:43:29 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.960 23:43:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:20.960 INFO: Setting log level to 20 00:39:20.960 INFO: Setting log level to 20 00:39:20.960 INFO: Log level set to 20 00:39:20.960 INFO: Log level set to 20 00:39:20.960 INFO: Requests: 00:39:20.960 { 00:39:20.960 "jsonrpc": "2.0", 00:39:20.960 "method": "framework_start_init", 00:39:20.960 "id": 1 00:39:20.960 } 00:39:20.960 00:39:20.960 INFO: Requests: 00:39:20.960 { 00:39:20.960 "jsonrpc": "2.0", 00:39:20.960 "method": "framework_start_init", 00:39:20.960 "id": 1 00:39:20.960 } 00:39:20.960 00:39:21.529 [2024-07-10 23:43:30.294173] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:21.529 INFO: response: 00:39:21.529 { 00:39:21.529 "jsonrpc": "2.0", 00:39:21.529 "id": 1, 00:39:21.529 "result": true 00:39:21.529 } 00:39:21.529 00:39:21.529 INFO: response: 00:39:21.529 { 00:39:21.529 "jsonrpc": "2.0", 00:39:21.529 "id": 1, 00:39:21.529 "result": true 00:39:21.529 } 00:39:21.529 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.529 23:43:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.529 23:43:30 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:21.529 INFO: Setting log level to 40 00:39:21.529 INFO: Setting log level to 40 00:39:21.529 INFO: Setting log level to 40 00:39:21.529 [2024-07-10 23:43:30.312826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.529 23:43:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:21.529 23:43:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.529 23:43:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:24.812 Nvme0n1 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:24.812 [2024-07-10 23:43:33.273794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:24.812 [ 00:39:24.812 { 00:39:24.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:24.812 "subtype": "Discovery", 00:39:24.812 "listen_addresses": [], 00:39:24.812 "allow_any_host": true, 00:39:24.812 "hosts": [] 00:39:24.812 }, 00:39:24.812 { 00:39:24.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:24.812 "subtype": "NVMe", 00:39:24.812 "listen_addresses": [ 00:39:24.812 { 00:39:24.812 "trtype": "TCP", 00:39:24.812 "adrfam": "IPv4", 00:39:24.812 "traddr": "10.0.0.2", 00:39:24.812 "trsvcid": "4420" 00:39:24.812 } 00:39:24.812 ], 00:39:24.812 "allow_any_host": true, 00:39:24.812 "hosts": [], 00:39:24.812 "serial_number": 
"SPDK00000000000001", 00:39:24.812 "model_number": "SPDK bdev Controller", 00:39:24.812 "max_namespaces": 1, 00:39:24.812 "min_cntlid": 1, 00:39:24.812 "max_cntlid": 65519, 00:39:24.812 "namespaces": [ 00:39:24.812 { 00:39:24.812 "nsid": 1, 00:39:24.812 "bdev_name": "Nvme0n1", 00:39:24.812 "name": "Nvme0n1", 00:39:24.812 "nguid": "81CD5E1CB3A0493DAEAF2B033F9BC852", 00:39:24.812 "uuid": "81cd5e1c-b3a0-493d-aeaf-2b033f9bc852" 00:39:24.812 } 00:39:24.812 ] 00:39:24.812 } 00:39:24.812 ] 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:24.812 EAL: No free 2048 kB hugepages reported on node 1 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:24.812 EAL: No free 2048 kB hugepages reported on node 1 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:24.812 23:43:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:24.812 rmmod nvme_tcp 00:39:24.812 rmmod nvme_fabrics 00:39:24.812 rmmod nvme_keyring 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:39:24.812 23:43:33 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2683096 ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2683096 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2683096 ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2683096 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2683096 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2683096' 00:39:24.812 killing process with pid 2683096 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2683096 00:39:24.812 23:43:33 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2683096 00:39:28.098 23:43:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:28.098 23:43:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:28.098 23:43:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:28.098 23:43:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:28.098 23:43:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:28.098 23:43:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.098 23:43:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:28.098 23:43:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.477 23:43:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:29.478 00:39:29.478 real 0m23.159s 00:39:29.478 user 0m33.721s 00:39:29.478 sys 0m4.784s 00:39:29.478 23:43:38 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:29.478 23:43:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:29.478 ************************************ 00:39:29.478 END TEST nvmf_identify_passthru 00:39:29.478 ************************************ 00:39:29.478 23:43:38 -- common/autotest_common.sh@1142 -- # return 0 00:39:29.478 23:43:38 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:29.478 23:43:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:29.478 23:43:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:29.478 23:43:38 -- common/autotest_common.sh@10 -- # set +x 00:39:29.739 ************************************ 00:39:29.739 START TEST nvmf_dif 00:39:29.739 ************************************ 00:39:29.739 23:43:38 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:29.739 * Looking for test storage... 
00:39:29.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:29.739 23:43:38 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.739 23:43:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.740 23:43:38 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.740 23:43:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.740 23:43:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.740 23:43:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.740 23:43:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.740 23:43:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.740 23:43:38 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:39:29.740 23:43:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:29.740 23:43:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:29.740 23:43:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:29.740 23:43:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:29.740 23:43:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:29.740 23:43:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.740 23:43:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:29.740 23:43:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:29.740 23:43:38 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:39:29.740 23:43:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:35.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:35.047 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:35.047 Found net devices under 0000:86:00.0: cvl_0_0 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:35.047 Found net devices under 0000:86:00.1: cvl_0_1 00:39:35.047 23:43:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:35.048 23:43:43 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:35.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:35.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:39:35.048 00:39:35.048 --- 10.0.0.2 ping statistics --- 00:39:35.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.048 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:35.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:35.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:39:35.048 00:39:35.048 --- 10.0.0.1 ping statistics --- 00:39:35.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.048 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:35.048 23:43:43 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:37.582 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:39:37.582 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:39:37.582 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:39:37.582 23:43:46 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.582 23:43:46 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:37.582 23:43:46 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:37.582 23:43:46 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.582 23:43:46 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:37.582 23:43:46 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:37.583 23:43:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:37.583 23:43:46 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:37.583 23:43:46 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.583 23:43:46 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2688781 00:39:37.583 23:43:46 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2688781 00:39:37.583 23:43:46 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2688781 ']' 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:37.583 23:43:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.583 [2024-07-10 23:43:46.406381] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:39:37.583 [2024-07-10 23:43:46.406465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.583 EAL: No free 2048 kB hugepages reported on node 1 00:39:37.583 [2024-07-10 23:43:46.515079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.842 [2024-07-10 23:43:46.732600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.842 [2024-07-10 23:43:46.732643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.842 [2024-07-10 23:43:46.732654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.842 [2024-07-10 23:43:46.732665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.842 [2024-07-10 23:43:46.732674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:37.842 [2024-07-10 23:43:46.732702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:39:38.411 23:43:47 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 23:43:47 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.411 23:43:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:38.411 23:43:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 [2024-07-10 23:43:47.213778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.411 23:43:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 ************************************ 00:39:38.411 START TEST fio_dif_1_default 00:39:38.411 ************************************ 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 bdev_null0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.411 [2024-07-10 23:43:47.282130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:38.411 { 00:39:38.411 "params": { 00:39:38.411 "name": "Nvme$subsystem", 00:39:38.411 "trtype": "$TEST_TRANSPORT", 00:39:38.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.411 "adrfam": "ipv4", 00:39:38.411 "trsvcid": "$NVMF_PORT", 00:39:38.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.411 "hdgst": ${hdgst:-false}, 00:39:38.411 "ddgst": ${ddgst:-false} 00:39:38.411 }, 00:39:38.411 "method": "bdev_nvme_attach_controller" 00:39:38.411 } 00:39:38.411 EOF 00:39:38.411 )") 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:38.411 "params": { 00:39:38.411 "name": "Nvme0", 00:39:38.411 "trtype": "tcp", 00:39:38.411 "traddr": "10.0.0.2", 00:39:38.411 "adrfam": "ipv4", 00:39:38.411 "trsvcid": "4420", 00:39:38.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:38.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:38.411 "hdgst": false, 00:39:38.411 "ddgst": false 00:39:38.411 }, 00:39:38.411 "method": "bdev_nvme_attach_controller" 00:39:38.411 }' 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:38.411 23:43:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.670 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:38.670 fio-3.35 00:39:38.670 Starting 1 thread 00:39:38.670 EAL: No free 2048 kB hugepages reported on node 1 00:39:50.884 00:39:50.884 filename0: (groupid=0, jobs=1): err= 0: pid=2689367: Wed Jul 10 23:43:58 2024 00:39:50.884 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:39:50.884 slat (nsec): min=6939, max=38110, avg=8834.29, stdev=2820.17 00:39:50.884 clat (usec): min=40830, max=42049, avg=41009.86, stdev=185.32 00:39:50.884 lat (usec): min=40837, max=42071, avg=41018.69, stdev=185.66 00:39:50.884 clat percentiles (usec): 00:39:50.884 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:50.884 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:50.884 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:50.884 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:50.884 | 99.99th=[42206] 00:39:50.884 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:39:50.884 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:50.884 lat (msec) : 50=100.00% 00:39:50.884 cpu : usr=95.26%, sys=4.43%, ctx=13, majf=0, minf=1634 00:39:50.884 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.884 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.884 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:50.884 00:39:50.884 Run status group 
0 (all jobs): 00:39:50.884 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:39:50.884 ----------------------------------------------------- 00:39:50.884 Suppressions used: 00:39:50.884 count bytes template 00:39:50.884 1 8 /usr/src/fio/parse.c 00:39:50.884 1 8 libtcmalloc_minimal.so 00:39:50.884 1 904 libcrypto.so 00:39:50.884 ----------------------------------------------------- 00:39:50.884 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 00:39:50.884 real 0m12.376s 00:39:50.884 user 0m16.783s 00:39:50.884 sys 0m0.890s 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 ************************************ 00:39:50.884 END TEST fio_dif_1_default 00:39:50.884 ************************************ 00:39:50.884 23:43:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:50.884 23:43:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:50.884 23:43:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:50.884 23:43:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 ************************************ 00:39:50.884 START TEST fio_dif_1_multi_subsystems 00:39:50.884 ************************************ 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 bdev_null0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 [2024-07-10 23:43:59.723951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 bdev_null1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:50.884 23:43:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:50.884 { 00:39:50.884 "params": { 00:39:50.884 "name": "Nvme$subsystem", 00:39:50.884 "trtype": "$TEST_TRANSPORT", 00:39:50.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:50.884 "adrfam": "ipv4", 00:39:50.884 "trsvcid": "$NVMF_PORT", 00:39:50.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:50.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:50.884 "hdgst": ${hdgst:-false}, 00:39:50.884 "ddgst": ${ddgst:-false} 00:39:50.884 }, 00:39:50.884 "method": "bdev_nvme_attach_controller" 00:39:50.884 } 00:39:50.884 EOF 00:39:50.884 )") 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:50.884 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:50.884 23:43:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:50.885 { 00:39:50.885 "params": { 00:39:50.885 "name": "Nvme$subsystem", 00:39:50.885 "trtype": "$TEST_TRANSPORT", 00:39:50.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:50.885 "adrfam": "ipv4", 00:39:50.885 "trsvcid": "$NVMF_PORT", 00:39:50.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:50.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:50.885 "hdgst": ${hdgst:-false}, 00:39:50.885 "ddgst": ${ddgst:-false} 00:39:50.885 }, 00:39:50.885 "method": "bdev_nvme_attach_controller" 00:39:50.885 } 00:39:50.885 EOF 00:39:50.885 )") 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:50.885 "params": { 00:39:50.885 "name": "Nvme0", 00:39:50.885 "trtype": "tcp", 00:39:50.885 "traddr": "10.0.0.2", 00:39:50.885 "adrfam": "ipv4", 00:39:50.885 "trsvcid": "4420", 00:39:50.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:50.885 "hdgst": false, 00:39:50.885 "ddgst": false 00:39:50.885 }, 00:39:50.885 "method": "bdev_nvme_attach_controller" 00:39:50.885 },{ 00:39:50.885 "params": { 00:39:50.885 "name": "Nvme1", 00:39:50.885 "trtype": "tcp", 00:39:50.885 "traddr": "10.0.0.2", 00:39:50.885 "adrfam": "ipv4", 00:39:50.885 "trsvcid": "4420", 00:39:50.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:50.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:50.885 "hdgst": false, 00:39:50.885 "ddgst": false 00:39:50.885 }, 00:39:50.885 "method": "bdev_nvme_attach_controller" 00:39:50.885 }' 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:50.885 23:43:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:51.144 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:51.144 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:51.144 fio-3.35 00:39:51.144 Starting 2 threads 00:39:51.144 EAL: No free 2048 kB hugepages reported on node 1 00:40:03.367 00:40:03.367 filename0: (groupid=0, jobs=1): err= 0: pid=2691381: Wed Jul 10 23:44:11 2024 00:40:03.367 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10034msec) 00:40:03.367 slat (nsec): min=7079, max=36012, avg=9283.87, stdev=2843.89 00:40:03.367 clat (usec): min=40790, max=44448, avg=41092.87, stdev=375.69 00:40:03.367 lat (usec): min=40798, max=44484, avg=41102.15, stdev=376.39 00:40:03.367 clat percentiles (usec): 00:40:03.367 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:03.367 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:03.367 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:40:03.367 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:40:03.367 | 99.99th=[44303] 00:40:03.367 bw ( KiB/s): min= 384, max= 416, per=33.84%, avg=388.80, stdev=11.72, samples=20 00:40:03.367 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:40:03.367 lat (msec) : 50=100.00% 00:40:03.367 cpu : usr=97.72%, sys=2.00%, ctx=13, majf=0, minf=1634 00:40:03.367 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.367 issued rwts: total=976,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:03.367 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:03.367 filename1: (groupid=0, jobs=1): err= 0: pid=2691382: Wed Jul 10 23:44:11 2024 00:40:03.367 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10011msec) 00:40:03.367 slat (nsec): min=7073, max=34348, avg=8711.24, stdev=2266.32 00:40:03.367 clat (usec): min=594, max=45907, avg=21050.29, stdev=20358.05 00:40:03.367 lat (usec): min=601, max=45941, avg=21059.00, stdev=20357.59 00:40:03.367 clat percentiles (usec): 00:40:03.367 | 1.00th=[ 603], 5.00th=[ 611], 10.00th=[ 619], 20.00th=[ 627], 00:40:03.367 | 30.00th=[ 635], 40.00th=[ 701], 50.00th=[40633], 60.00th=[41157], 00:40:03.367 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:03.367 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:40:03.367 | 99.99th=[45876] 00:40:03.367 bw ( KiB/s): min= 704, max= 768, per=66.11%, avg=758.40, stdev=21.02, samples=20 00:40:03.367 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:40:03.367 lat (usec) : 750=46.11%, 1000=3.79% 00:40:03.367 lat (msec) : 50=50.11% 00:40:03.367 cpu : usr=97.33%, sys=2.39%, ctx=13, majf=0, minf=1636 00:40:03.367 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.367 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.367 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:03.367 00:40:03.367 Run status group 0 (all jobs): 00:40:03.367 READ: bw=1147KiB/s (1174kB/s), 389KiB/s-759KiB/s (398kB/s-777kB/s), io=11.2MiB (11.8MB), run=10011-10034msec 00:40:03.367 ----------------------------------------------------- 00:40:03.367 Suppressions used: 00:40:03.367 count bytes template 00:40:03.367 2 16 /usr/src/fio/parse.c 00:40:03.367 1 8 libtcmalloc_minimal.so 00:40:03.367 1 904 libcrypto.so 00:40:03.367 ----------------------------------------------------- 00:40:03.367 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@45 -- # for sub in "$@" 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.367 00:40:03.367 real 0m12.680s 00:40:03.367 user 0m27.641s 00:40:03.367 sys 0m0.890s 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:03.367 23:44:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.367 ************************************ 00:40:03.367 END TEST fio_dif_1_multi_subsystems 00:40:03.367 ************************************ 00:40:03.367 23:44:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:03.367 23:44:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:03.367 23:44:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:03.367 23:44:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:03.367 23:44:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:03.626 ************************************ 00:40:03.626 START TEST fio_dif_rand_params 00:40:03.626 ************************************ 00:40:03.626 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:03.627 
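Note: the xtrace above shows target/dif.sh standing up the device under test for fio_dif_rand_params. NULL_DIF=3 selects DIF type 3 end-to-end protection, so the null bdev is created with 512-byte blocks plus 16 bytes of per-block metadata, then exported as an NVMe-oF subsystem with an NVMe/TCP listener. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a minimal sketch of the same sequence issued directly against an already-running nvmf_tgt (transport creation happens earlier in the suite and is assumed here) is:

    # 64 MiB null bdev, 512 B blocks + 16 B metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # subsystem, namespace, and NVMe/TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420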
23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.627 bdev_null0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.627 [2024-07-10 23:44:12.476021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:03.627 { 00:40:03.627 "params": { 00:40:03.627 "name": "Nvme$subsystem", 00:40:03.627 "trtype": "$TEST_TRANSPORT", 00:40:03.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:03.627 "adrfam": "ipv4", 00:40:03.627 "trsvcid": "$NVMF_PORT", 00:40:03.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:03.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:03.627 "hdgst": ${hdgst:-false}, 00:40:03.627 "ddgst": ${ddgst:-false} 00:40:03.627 }, 00:40:03.627 "method": "bdev_nvme_attach_controller" 00:40:03.627 } 00:40:03.627 EOF 00:40:03.627 )") 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:03.627 "params": { 00:40:03.627 "name": "Nvme0", 00:40:03.627 "trtype": "tcp", 00:40:03.627 "traddr": "10.0.0.2", 00:40:03.627 "adrfam": "ipv4", 00:40:03.627 "trsvcid": "4420", 00:40:03.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:03.627 "hdgst": false, 00:40:03.627 "ddgst": false 00:40:03.627 }, 00:40:03.627 "method": "bdev_nvme_attach_controller" 00:40:03.627 }' 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:03.627 23:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.886 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:03.886 ... 
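Note: with the target listening, the test drives I/O through SPDK's userspace fio plugin rather than the kernel NVMe/TCP initiator. gen_nvmf_target_json turns each subsystem into a bdev_nvme_attach_controller entry (the JSON printed above), hands the assembled config to fio on /dev/fd/62 via --spdk_json_conf, and supplies the generated job file on /dev/fd/61; because this is an ASAN build, libasan must be preloaded ahead of the spdk_bdev ioengine, hence the two-element LD_PRELOAD. A standalone sketch, assuming the printed config has been saved under its {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope as bdev.json and a job file randread.fio that sets thread=1 (the plugin requires it):

    # preload ASAN first, then the SPDK external ioengine (drop libasan on non-sanitizer builds)
    LD_PRELOAD='/usr/lib64/libasan.so.8 ./spdk/build/fio/spdk_bdev' \
        fio --ioengine=spdk_bdev --spdk_json_conf bdev.json randread.fio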
00:40:03.886 fio-3.35 00:40:03.886 Starting 3 threads 00:40:03.886 EAL: No free 2048 kB hugepages reported on node 1 00:40:10.451 00:40:10.451 filename0: (groupid=0, jobs=1): err= 0: pid=2693517: Wed Jul 10 23:44:18 2024 00:40:10.451 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(150MiB/5045msec) 00:40:10.451 slat (nsec): min=5035, max=30340, avg=12848.02, stdev=2408.55 00:40:10.451 clat (usec): min=4367, max=53032, avg=12582.33, stdev=11512.17 00:40:10.451 lat (usec): min=4377, max=53046, avg=12595.17, stdev=11512.13 00:40:10.451 clat percentiles (usec): 00:40:10.451 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 6325], 20.00th=[ 7242], 00:40:10.451 | 30.00th=[ 7767], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10159], 00:40:10.451 | 70.00th=[10945], 80.00th=[11994], 90.00th=[13829], 95.00th=[49021], 00:40:10.451 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[53216], 00:40:10.451 | 99.99th=[53216] 00:40:10.451 bw ( KiB/s): min=18688, max=39936, per=31.66%, avg=30617.60, stdev=6012.46, samples=10 00:40:10.451 iops : min= 146, max= 312, avg=239.20, stdev=46.97, samples=10 00:40:10.451 lat (msec) : 10=57.51%, 20=33.56%, 50=6.43%, 100=2.50% 00:40:10.451 cpu : usr=95.54%, sys=4.08%, ctx=37, majf=0, minf=1636 00:40:10.451 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.451 issued rwts: total=1198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.451 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:10.451 filename0: (groupid=0, jobs=1): err= 0: pid=2693518: Wed Jul 10 23:44:18 2024 00:40:10.451 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(167MiB/5043msec) 00:40:10.451 slat (nsec): min=5594, max=21661, avg=12836.32, stdev=2490.13 00:40:10.451 clat (usec): min=4104, max=89455, avg=11276.32, stdev=10021.81 00:40:10.451 lat (usec): min=4114, max=89465, avg=11289.16, stdev=10021.95 00:40:10.451 clat percentiles (usec): 00:40:10.451 | 1.00th=[ 4621], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 6718], 00:40:10.451 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[ 9896], 00:40:10.451 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13304], 95.00th=[47449], 00:40:10.451 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[89654], 00:40:10.451 | 99.99th=[89654] 00:40:10.451 bw ( KiB/s): min=28416, max=37888, per=35.31%, avg=34150.40, stdev=2983.50, samples=10 00:40:10.451 iops : min= 222, max= 296, avg=266.80, stdev=23.31, samples=10 00:40:10.451 lat (msec) : 10=60.93%, 20=33.16%, 50=4.19%, 100=1.72% 00:40:10.451 cpu : usr=94.76%, sys=4.90%, ctx=6, majf=0, minf=1634 00:40:10.451 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.451 issued rwts: total=1336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.451 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:10.451 filename0: (groupid=0, jobs=1): err= 0: pid=2693519: Wed Jul 10 23:44:18 2024 00:40:10.451 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(160MiB/5044msec) 00:40:10.451 slat (nsec): min=7551, max=30434, avg=12509.99, stdev=2555.93 00:40:10.451 clat (usec): min=4313, max=56286, avg=11791.26, stdev=11020.60 00:40:10.451 lat (usec): min=4324, max=56302, avg=11803.77, stdev=11020.75 00:40:10.451 clat percentiles (usec): 
00:40:10.451 | 1.00th=[ 4621], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 6783], 00:40:10.451 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[ 9896], 00:40:10.451 | 70.00th=[10814], 80.00th=[11600], 90.00th=[13173], 95.00th=[49021], 00:40:10.451 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[56361], 00:40:10.451 | 99.99th=[56361] 00:40:10.451 bw ( KiB/s): min=24064, max=40704, per=33.77%, avg=32665.60, stdev=5958.93, samples=10 00:40:10.451 iops : min= 188, max= 318, avg=255.20, stdev=46.55, samples=10 00:40:10.451 lat (msec) : 10=60.56%, 20=32.24%, 50=2.97%, 100=4.23% 00:40:10.451 cpu : usr=95.08%, sys=4.58%, ctx=12, majf=0, minf=1637 00:40:10.451 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.451 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.451 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:10.451 00:40:10.451 Run status group 0 (all jobs): 00:40:10.451 READ: bw=94.4MiB/s (99.0MB/s), 29.7MiB/s-33.1MiB/s (31.1MB/s-34.7MB/s), io=477MiB (500MB), run=5043-5045msec 00:40:11.021 ----------------------------------------------------- 00:40:11.021 Suppressions used: 00:40:11.021 count bytes template 00:40:11.021 5 44 /usr/src/fio/parse.c 00:40:11.021 1 8 libtcmalloc_minimal.so 00:40:11.021 1 904 libcrypto.so 00:40:11.021 ----------------------------------------------------- 00:40:11.021 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:11.021 23:44:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:11.021 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 bdev_null0 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 [2024-07-10 23:44:20.046062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 bdev_null1 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:11.022 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.367 bdev_null2 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:11.367 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:11.367 { 00:40:11.367 "params": { 00:40:11.367 "name": "Nvme$subsystem", 00:40:11.367 "trtype": "$TEST_TRANSPORT", 00:40:11.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:11.367 "adrfam": "ipv4", 00:40:11.367 "trsvcid": "$NVMF_PORT", 00:40:11.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:11.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:11.367 "hdgst": ${hdgst:-false}, 00:40:11.367 "ddgst": ${ddgst:-false} 00:40:11.368 }, 00:40:11.368 "method": "bdev_nvme_attach_controller" 00:40:11.368 } 00:40:11.368 EOF 00:40:11.368 )") 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:11.368 { 00:40:11.368 "params": { 00:40:11.368 "name": "Nvme$subsystem", 00:40:11.368 "trtype": "$TEST_TRANSPORT", 00:40:11.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:11.368 "adrfam": "ipv4", 00:40:11.368 "trsvcid": "$NVMF_PORT", 00:40:11.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:11.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:11.368 
"hdgst": ${hdgst:-false}, 00:40:11.368 "ddgst": ${ddgst:-false} 00:40:11.368 }, 00:40:11.368 "method": "bdev_nvme_attach_controller" 00:40:11.368 } 00:40:11.368 EOF 00:40:11.368 )") 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:11.368 { 00:40:11.368 "params": { 00:40:11.368 "name": "Nvme$subsystem", 00:40:11.368 "trtype": "$TEST_TRANSPORT", 00:40:11.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:11.368 "adrfam": "ipv4", 00:40:11.368 "trsvcid": "$NVMF_PORT", 00:40:11.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:11.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:11.368 "hdgst": ${hdgst:-false}, 00:40:11.368 "ddgst": ${ddgst:-false} 00:40:11.368 }, 00:40:11.368 "method": "bdev_nvme_attach_controller" 00:40:11.368 } 00:40:11.368 EOF 00:40:11.368 )") 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:11.368 "params": { 00:40:11.368 "name": "Nvme0", 00:40:11.368 "trtype": "tcp", 00:40:11.368 "traddr": "10.0.0.2", 00:40:11.368 "adrfam": "ipv4", 00:40:11.368 "trsvcid": "4420", 00:40:11.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:11.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:11.368 "hdgst": false, 00:40:11.368 "ddgst": false 00:40:11.368 }, 00:40:11.368 "method": "bdev_nvme_attach_controller" 00:40:11.368 },{ 00:40:11.368 "params": { 00:40:11.368 "name": "Nvme1", 00:40:11.368 "trtype": "tcp", 00:40:11.368 "traddr": "10.0.0.2", 00:40:11.368 "adrfam": "ipv4", 00:40:11.368 "trsvcid": "4420", 00:40:11.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:11.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:11.368 "hdgst": false, 00:40:11.368 "ddgst": false 00:40:11.368 }, 00:40:11.368 "method": "bdev_nvme_attach_controller" 00:40:11.368 },{ 00:40:11.368 "params": { 00:40:11.368 "name": "Nvme2", 00:40:11.368 "trtype": "tcp", 00:40:11.368 "traddr": "10.0.0.2", 00:40:11.368 "adrfam": "ipv4", 00:40:11.368 "trsvcid": "4420", 00:40:11.368 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:11.368 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:11.368 "hdgst": false, 00:40:11.368 "ddgst": false 00:40:11.368 }, 00:40:11.368 "method": "bdev_nvme_attach_controller" 00:40:11.368 }' 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:11.368 23:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:11.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:11.631 ... 00:40:11.631 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:11.631 ... 00:40:11.631 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:11.631 ... 00:40:11.631 fio-3.35 00:40:11.631 Starting 24 threads 00:40:11.631 EAL: No free 2048 kB hugepages reported on node 1 00:40:23.821 00:40:23.821 filename0: (groupid=0, jobs=1): err= 0: pid=2694869: Wed Jul 10 23:44:31 2024 00:40:23.821 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.0MiB/10013msec) 00:40:23.821 slat (usec): min=5, max=104, avg=42.99, stdev=26.61 00:40:23.821 clat (usec): min=12952, max=84941, avg=32495.27, stdev=3883.44 00:40:23.821 lat (usec): min=12970, max=84963, avg=32538.26, stdev=3886.55 00:40:23.821 clat percentiles (usec): 00:40:23.821 | 1.00th=[21890], 5.00th=[26608], 10.00th=[32113], 20.00th=[32375], 00:40:23.821 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:40:23.821 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:23.821 | 99.00th=[41157], 99.50th=[47449], 99.90th=[77071], 99.95th=[77071], 00:40:23.821 | 99.99th=[85459] 00:40:23.821 bw ( KiB/s): min= 1712, max= 2112, per=4.20%, avg=1940.00, stdev=92.55, samples=20 00:40:23.821 iops : min= 428, max= 528, avg=485.00, stdev=23.14, samples=20 00:40:23.821 lat (msec) : 20=0.49%, 50=99.18%, 100=0.33% 00:40:23.821 cpu : usr=98.73%, sys=0.85%, ctx=18, majf=0, minf=1633 00:40:23.821 IO depths : 1=5.0%, 2=10.2%, 4=21.3%, 8=55.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:40:23.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 complete : 0=0.0%, 4=93.2%, 8=1.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 issued rwts: total=4866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.821 filename0: (groupid=0, jobs=1): err= 0: pid=2694870: Wed Jul 10 23:44:31 2024 00:40:23.821 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:40:23.821 slat (usec): min=7, max=107, avg=48.68, stdev=24.80 00:40:23.821 clat (usec): min=25846, max=82406, avg=32963.26, stdev=2934.62 00:40:23.821 lat (usec): min=25869, max=82432, avg=33011.93, stdev=2933.63 00:40:23.821 clat percentiles (usec): 00:40:23.821 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.821 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:40:23.821 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:23.821 | 99.00th=[34341], 99.50th=[34866], 99.90th=[82314], 99.95th=[82314], 00:40:23.821 | 99.99th=[82314] 00:40:23.821 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.821 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.821 lat (msec) : 50=99.67%, 100=0.33% 00:40:23.821 cpu : usr=98.84%, sys=0.75%, ctx=14, majf=0, minf=1633 00:40:23.821 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:40:23.821 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.821 filename0: (groupid=0, jobs=1): err= 0: pid=2694871: Wed Jul 10 23:44:31 2024 00:40:23.821 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10014msec) 00:40:23.821 slat (usec): min=6, max=105, avg=41.72, stdev=23.89 00:40:23.821 clat (usec): min=19562, max=66613, avg=32953.13, stdev=2493.18 00:40:23.821 lat (usec): min=19571, max=66639, avg=32994.86, stdev=2492.21 00:40:23.821 clat percentiles (usec): 00:40:23.821 | 1.00th=[25822], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.821 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:23.821 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.821 | 99.00th=[34341], 99.50th=[53740], 99.90th=[66323], 99.95th=[66847], 00:40:23.821 | 99.99th=[66847] 00:40:23.821 bw ( KiB/s): min= 1792, max= 2096, per=4.14%, avg=1915.79, stdev=59.35, samples=19 00:40:23.821 iops : min= 448, max= 524, avg=478.95, stdev=14.84, samples=19 00:40:23.821 lat (msec) : 20=0.12%, 50=99.33%, 100=0.54% 00:40:23.821 cpu : usr=98.76%, sys=0.82%, ctx=15, majf=0, minf=1634 00:40:23.821 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:23.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.821 filename0: (groupid=0, jobs=1): err= 0: pid=2694872: Wed Jul 10 23:44:31 2024 00:40:23.821 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10028msec) 00:40:23.821 slat (nsec): min=6035, max=50298, avg=20999.40, stdev=5854.56 00:40:23.821 clat (usec): min=21055, max=68096, avg=33233.45, stdev=2321.84 00:40:23.821 lat (usec): min=21067, max=68119, avg=33254.45, stdev=2321.58 00:40:23.821 clat percentiles (usec): 00:40:23.821 | 1.00th=[30016], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:23.821 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.821 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:40:23.821 | 99.00th=[41157], 99.50th=[43779], 99.90th=[67634], 99.95th=[67634], 00:40:23.821 | 99.99th=[67634] 00:40:23.821 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1913.26, stdev=65.61, samples=19 00:40:23.821 iops : min= 448, max= 512, avg=478.32, stdev=16.40, samples=19 00:40:23.821 lat (msec) : 50=99.67%, 100=0.33% 00:40:23.821 cpu : usr=98.88%, sys=0.71%, ctx=14, majf=0, minf=1632 00:40:23.821 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:40:23.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.821 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.821 filename0: (groupid=0, jobs=1): err= 0: pid=2694873: Wed Jul 10 23:44:31 2024 00:40:23.821 read: IOPS=483, BW=1934KiB/s (1981kB/s)(18.9MiB/10013msec) 00:40:23.821 slat (nsec): min=7310, max=99689, avg=19101.93, stdev=11979.40 00:40:23.821 clat (usec): min=13597, max=85285, avg=32987.33, stdev=4302.30 00:40:23.821 lat (usec): min=13613, 
max=85310, avg=33006.43, stdev=4301.89 00:40:23.821 clat percentiles (usec): 00:40:23.822 | 1.00th=[21365], 5.00th=[27919], 10.00th=[29492], 20.00th=[32637], 00:40:23.822 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:40:23.822 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[35390], 00:40:23.822 | 99.00th=[44303], 99.50th=[51119], 99.90th=[85459], 99.95th=[85459], 00:40:23.822 | 99.99th=[85459] 00:40:23.822 bw ( KiB/s): min= 1664, max= 2016, per=4.18%, avg=1932.80, stdev=72.42, samples=20 00:40:23.822 iops : min= 416, max= 504, avg=483.20, stdev=18.10, samples=20 00:40:23.822 lat (msec) : 20=0.39%, 50=99.07%, 100=0.54% 00:40:23.822 cpu : usr=98.73%, sys=0.85%, ctx=18, majf=0, minf=1632 00:40:23.822 IO depths : 1=0.7%, 2=1.4%, 4=3.8%, 8=77.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=89.8%, 8=8.9%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename0: (groupid=0, jobs=1): err= 0: pid=2694875: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10014msec) 00:40:23.822 slat (nsec): min=5239, max=67784, avg=21873.15, stdev=6758.69 00:40:23.822 clat (usec): min=22987, max=84918, avg=33291.99, stdev=3085.29 00:40:23.822 lat (usec): min=22998, max=84943, avg=33313.86, stdev=3084.51 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:23.822 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.822 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.822 | 99.00th=[34866], 99.50th=[41681], 99.90th=[84411], 99.95th=[84411], 00:40:23.822 | 99.99th=[84411] 00:40:23.822 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.822 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.822 lat (msec) : 50=99.67%, 100=0.33% 00:40:23.822 cpu : usr=98.66%, sys=0.93%, ctx=23, majf=0, minf=1637 00:40:23.822 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename0: (groupid=0, jobs=1): err= 0: pid=2694876: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10010msec) 00:40:23.822 slat (usec): min=8, max=106, avg=41.48, stdev=24.64 00:40:23.822 clat (usec): min=21991, max=75900, avg=33039.34, stdev=2729.16 00:40:23.822 lat (usec): min=22001, max=75936, avg=33080.82, stdev=2727.54 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[29492], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.822 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:40:23.822 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.822 | 99.00th=[34866], 99.50th=[47973], 99.90th=[76022], 99.95th=[76022], 00:40:23.822 | 99.99th=[76022] 00:40:23.822 bw ( KiB/s): min= 1667, max= 1968, per=4.13%, avg=1909.21, stdev=59.68, samples=19 00:40:23.822 iops : min= 416, max= 492, avg=477.26, 
stdev=15.09, samples=19 00:40:23.822 lat (msec) : 50=99.67%, 100=0.33% 00:40:23.822 cpu : usr=98.80%, sys=0.79%, ctx=13, majf=0, minf=1633 00:40:23.822 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename0: (groupid=0, jobs=1): err= 0: pid=2694877: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10014msec) 00:40:23.822 slat (nsec): min=4662, max=47432, avg=19748.27, stdev=6766.67 00:40:23.822 clat (usec): min=16624, max=87510, avg=33326.79, stdev=3254.95 00:40:23.822 lat (usec): min=16638, max=87528, avg=33346.53, stdev=3254.13 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:40:23.822 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:40:23.822 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.822 | 99.00th=[34341], 99.50th=[35390], 99.90th=[87557], 99.95th=[87557], 00:40:23.822 | 99.99th=[87557] 00:40:23.822 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.822 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.822 lat (msec) : 20=0.08%, 50=99.58%, 100=0.33% 00:40:23.822 cpu : usr=98.74%, sys=0.84%, ctx=16, majf=0, minf=1634 00:40:23.822 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename1: (groupid=0, jobs=1): err= 0: pid=2694878: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10020msec) 00:40:23.822 slat (usec): min=4, max=170, avg=44.58, stdev=25.96 00:40:23.822 clat (usec): min=18278, max=45498, avg=32934.59, stdev=1265.46 00:40:23.822 lat (usec): min=18287, max=45517, avg=32979.17, stdev=1263.89 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.822 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.822 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.822 | 99.00th=[34341], 99.50th=[34866], 99.90th=[45351], 99.95th=[45351], 00:40:23.822 | 99.99th=[45351] 00:40:23.822 bw ( KiB/s): min= 1920, max= 1920, per=4.15%, avg=1920.00, stdev= 0.00, samples=19 00:40:23.822 iops : min= 480, max= 480, avg=480.00, stdev= 0.00, samples=19 00:40:23.822 lat (msec) : 20=0.33%, 50=99.67% 00:40:23.822 cpu : usr=98.77%, sys=0.82%, ctx=13, majf=0, minf=1637 00:40:23.822 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename1: (groupid=0, jobs=1): err= 0: 
pid=2694880: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10016msec) 00:40:23.822 slat (nsec): min=6886, max=94504, avg=15736.31, stdev=10872.28 00:40:23.822 clat (usec): min=13619, max=95875, avg=31547.01, stdev=5770.84 00:40:23.822 lat (usec): min=13629, max=95899, avg=31562.75, stdev=5771.10 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[19530], 5.00th=[21627], 10.00th=[23200], 20.00th=[27657], 00:40:23.822 | 30.00th=[29492], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.822 | 70.00th=[33162], 80.00th=[33817], 90.00th=[36963], 95.00th=[39060], 00:40:23.822 | 99.00th=[43254], 99.50th=[47449], 99.90th=[80217], 99.95th=[80217], 00:40:23.822 | 99.99th=[95945] 00:40:23.822 bw ( KiB/s): min= 1680, max= 2288, per=4.38%, avg=2023.10, stdev=127.85, samples=20 00:40:23.822 iops : min= 420, max= 572, avg=505.75, stdev=31.94, samples=20 00:40:23.822 lat (msec) : 20=1.52%, 50=98.16%, 100=0.32% 00:40:23.822 cpu : usr=98.76%, sys=0.82%, ctx=14, majf=0, minf=1634 00:40:23.822 IO depths : 1=0.1%, 2=0.1%, 4=2.7%, 8=81.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=88.9%, 8=8.9%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=5066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename1: (groupid=0, jobs=1): err= 0: pid=2694881: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10017msec) 00:40:23.822 slat (usec): min=3, max=145, avg=21.76, stdev= 6.68 00:40:23.822 clat (msec): min=17, max=102, avg=33.31, stdev= 3.48 00:40:23.822 lat (msec): min=17, max=102, avg=33.33, stdev= 3.48 00:40:23.822 clat percentiles (msec): 00:40:23.822 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:23.822 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:40:23.822 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:40:23.822 | 99.00th=[ 35], 99.50th=[ 36], 99.90th=[ 90], 99.95th=[ 90], 00:40:23.822 | 99.99th=[ 104] 00:40:23.822 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.822 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.822 lat (msec) : 20=0.04%, 50=99.62%, 100=0.29%, 250=0.04% 00:40:23.822 cpu : usr=98.82%, sys=0.75%, ctx=13, majf=0, minf=1635 00:40:23.822 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename1: (groupid=0, jobs=1): err= 0: pid=2694882: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10014msec) 00:40:23.822 slat (nsec): min=6295, max=47489, avg=21400.50, stdev=5793.59 00:40:23.822 clat (usec): min=12883, max=48742, avg=32967.39, stdev=2214.08 00:40:23.822 lat (usec): min=12894, max=48757, avg=32988.79, stdev=2214.61 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[21890], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:23.822 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.822 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 
00:40:23.822 | 99.00th=[34866], 99.50th=[41157], 99.90th=[44303], 99.95th=[44827], 00:40:23.822 | 99.99th=[48497] 00:40:23.822 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1926.74, stdev=29.37, samples=19 00:40:23.822 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:40:23.822 lat (msec) : 20=0.99%, 50=99.01% 00:40:23.822 cpu : usr=98.83%, sys=0.75%, ctx=12, majf=0, minf=1635 00:40:23.822 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.822 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.822 filename1: (groupid=0, jobs=1): err= 0: pid=2694883: Wed Jul 10 23:44:31 2024 00:40:23.822 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10011msec) 00:40:23.822 slat (usec): min=5, max=164, avg=23.53, stdev=11.55 00:40:23.822 clat (usec): min=16700, max=64165, avg=33135.31, stdev=2582.96 00:40:23.822 lat (usec): min=16717, max=64186, avg=33158.84, stdev=2582.13 00:40:23.822 clat percentiles (usec): 00:40:23.822 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:23.822 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.822 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.822 | 99.00th=[38536], 99.50th=[54264], 99.90th=[64226], 99.95th=[64226], 00:40:23.823 | 99.99th=[64226] 00:40:23.823 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1914.11, stdev=71.60, samples=19 00:40:23.823 iops : min= 448, max= 512, avg=478.53, stdev=17.90, samples=19 00:40:23.823 lat (msec) : 20=0.04%, 50=99.42%, 100=0.54% 00:40:23.823 cpu : usr=98.83%, sys=0.75%, ctx=12, majf=0, minf=1636 00:40:23.823 IO depths : 1=5.7%, 2=11.7%, 4=24.2%, 8=51.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename1: (groupid=0, jobs=1): err= 0: pid=2694884: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=476, BW=1908KiB/s (1954kB/s)(18.7MiB/10030msec) 00:40:23.823 slat (nsec): min=5080, max=50984, avg=22627.57, stdev=6701.89 00:40:23.823 clat (usec): min=16665, max=87880, avg=33297.85, stdev=3430.55 00:40:23.823 lat (usec): min=16675, max=87899, avg=33320.48, stdev=3429.73 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:40:23.823 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.823 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.823 | 99.00th=[35390], 99.50th=[48497], 99.90th=[87557], 99.95th=[87557], 00:40:23.823 | 99.99th=[87557] 00:40:23.823 bw ( KiB/s): min= 1648, max= 1936, per=4.12%, avg=1906.53, stdev=62.94, samples=19 00:40:23.823 iops : min= 412, max= 484, avg=476.63, stdev=15.73, samples=19 00:40:23.823 lat (msec) : 20=0.08%, 50=99.50%, 100=0.42% 00:40:23.823 cpu : usr=98.56%, sys=1.01%, ctx=14, majf=0, minf=1636 00:40:23.823 IO depths : 1=5.8%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 
complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename1: (groupid=0, jobs=1): err= 0: pid=2694886: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10008msec) 00:40:23.823 slat (usec): min=3, max=107, avg=49.34, stdev=24.27 00:40:23.823 clat (usec): min=25030, max=85196, avg=32981.21, stdev=3111.60 00:40:23.823 lat (usec): min=25049, max=85218, avg=33030.56, stdev=3110.14 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.823 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:40:23.823 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:23.823 | 99.00th=[34341], 99.50th=[34866], 99.90th=[85459], 99.95th=[85459], 00:40:23.823 | 99.99th=[85459] 00:40:23.823 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.823 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.823 lat (msec) : 50=99.67%, 100=0.33% 00:40:23.823 cpu : usr=98.76%, sys=0.82%, ctx=13, majf=0, minf=1636 00:40:23.823 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename1: (groupid=0, jobs=1): err= 0: pid=2694887: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=482, BW=1928KiB/s (1975kB/s)(18.9MiB/10023msec) 00:40:23.823 slat (nsec): min=6824, max=43806, avg=19473.15, stdev=6633.68 00:40:23.823 clat (usec): min=13263, max=43640, avg=33023.05, stdev=1905.64 00:40:23.823 lat (usec): min=13281, max=43660, avg=33042.52, stdev=1905.81 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[22414], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:23.823 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:40:23.823 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:40:23.823 | 99.00th=[34866], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:40:23.823 | 99.99th=[43779] 00:40:23.823 bw ( KiB/s): min= 1920, max= 2052, per=4.17%, avg=1926.95, stdev=30.28, samples=19 00:40:23.823 iops : min= 480, max= 513, avg=481.74, stdev= 7.57, samples=19 00:40:23.823 lat (msec) : 20=0.66%, 50=99.34% 00:40:23.823 cpu : usr=98.84%, sys=0.73%, ctx=12, majf=0, minf=1637 00:40:23.823 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename2: (groupid=0, jobs=1): err= 0: pid=2694888: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10008msec) 00:40:23.823 slat (usec): min=3, max=185, avg=49.06, stdev=24.11 00:40:23.823 clat (usec): min=24532, max=85734, avg=32992.32, stdev=3156.41 00:40:23.823 lat (usec): min=24548, max=85751, avg=33041.37, stdev=3154.85 
00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.823 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:40:23.823 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:23.823 | 99.00th=[34341], 99.50th=[39584], 99.90th=[85459], 99.95th=[85459], 00:40:23.823 | 99.99th=[85459] 00:40:23.823 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.823 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.823 lat (msec) : 50=99.67%, 100=0.33% 00:40:23.823 cpu : usr=98.67%, sys=0.91%, ctx=14, majf=0, minf=1635 00:40:23.823 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename2: (groupid=0, jobs=1): err= 0: pid=2694889: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10013msec) 00:40:23.823 slat (usec): min=5, max=106, avg=41.69, stdev=24.03 00:40:23.823 clat (usec): min=19145, max=66181, avg=32956.81, stdev=2488.84 00:40:23.823 lat (usec): min=19155, max=66205, avg=32998.49, stdev=2487.72 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[26084], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:40:23.823 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:23.823 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.823 | 99.00th=[34341], 99.50th=[52167], 99.90th=[66323], 99.95th=[66323], 00:40:23.823 | 99.99th=[66323] 00:40:23.823 bw ( KiB/s): min= 1792, max= 2096, per=4.14%, avg=1915.95, stdev=59.01, samples=19 00:40:23.823 iops : min= 448, max= 524, avg=478.95, stdev=14.84, samples=19 00:40:23.823 lat (msec) : 20=0.21%, 50=99.25%, 100=0.54% 00:40:23.823 cpu : usr=98.93%, sys=0.65%, ctx=17, majf=0, minf=1633 00:40:23.823 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename2: (groupid=0, jobs=1): err= 0: pid=2694891: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:40:23.823 slat (nsec): min=4077, max=48909, avg=22367.29, stdev=6645.79 00:40:23.823 clat (usec): min=15614, max=85737, avg=33282.63, stdev=3282.69 00:40:23.823 lat (usec): min=15625, max=85758, avg=33305.00, stdev=3281.94 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:40:23.823 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:23.823 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.823 | 99.00th=[34341], 99.50th=[48497], 99.90th=[85459], 99.95th=[85459], 00:40:23.823 | 99.99th=[85459] 00:40:23.823 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1906.53, stdev=58.73, samples=19 00:40:23.823 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:40:23.823 lat (msec) : 
20=0.25%, 50=99.37%, 100=0.38% 00:40:23.823 cpu : usr=98.69%, sys=0.90%, ctx=14, majf=0, minf=1636 00:40:23.823 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename2: (groupid=0, jobs=1): err= 0: pid=2694892: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=478, BW=1916KiB/s (1962kB/s)(18.8MiB/10032msec) 00:40:23.823 slat (nsec): min=3653, max=94592, avg=39877.64, stdev=17333.12 00:40:23.823 clat (usec): min=18371, max=45472, avg=32994.74, stdev=1321.90 00:40:23.823 lat (usec): min=18387, max=45491, avg=33034.61, stdev=1320.75 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:40:23.823 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:23.823 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:23.823 | 99.00th=[34866], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:40:23.823 | 99.99th=[45351] 00:40:23.823 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1920.00, stdev=60.34, samples=19 00:40:23.823 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:40:23.823 lat (msec) : 20=0.33%, 50=99.67% 00:40:23.823 cpu : usr=97.51%, sys=1.49%, ctx=95, majf=0, minf=1636 00:40:23.823 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:23.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.823 issued rwts: total=4805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.823 filename2: (groupid=0, jobs=1): err= 0: pid=2694893: Wed Jul 10 23:44:31 2024 00:40:23.823 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10012msec) 00:40:23.823 slat (nsec): min=4883, max=46873, avg=11400.25, stdev=3949.37 00:40:23.823 clat (usec): min=2984, max=48551, avg=32719.63, stdev=3673.21 00:40:23.823 lat (usec): min=2996, max=48561, avg=32731.03, stdev=3673.06 00:40:23.823 clat percentiles (usec): 00:40:23.823 | 1.00th=[ 9372], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:40:23.823 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:40:23.823 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:40:23.823 | 99.00th=[35390], 99.50th=[41681], 99.90th=[44827], 99.95th=[47973], 00:40:23.823 | 99.99th=[48497] 00:40:23.823 bw ( KiB/s): min= 1792, max= 2432, per=4.21%, avg=1946.95, stdev=124.97, samples=19 00:40:23.823 iops : min= 448, max= 608, avg=486.74, stdev=31.24, samples=19 00:40:23.823 lat (msec) : 4=0.33%, 10=0.98%, 20=1.07%, 50=97.62% 00:40:23.824 cpu : usr=98.91%, sys=0.66%, ctx=13, majf=0, minf=1639 00:40:23.824 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:23.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.824 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.824 filename2: (groupid=0, jobs=1): err= 0: pid=2694894: Wed Jul 10 
23:44:31 2024 00:40:23.824 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10008msec) 00:40:23.824 slat (usec): min=3, max=101, avg=18.91, stdev=17.92 00:40:23.824 clat (usec): min=3033, max=48673, avg=32769.69, stdev=3340.13 00:40:23.824 lat (usec): min=3045, max=48688, avg=32788.60, stdev=3340.41 00:40:23.824 clat percentiles (usec): 00:40:23.824 | 1.00th=[11207], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:40:23.824 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:40:23.824 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:40:23.824 | 99.00th=[34341], 99.50th=[34866], 99.90th=[48497], 99.95th=[48497], 00:40:23.824 | 99.99th=[48497] 00:40:23.824 bw ( KiB/s): min= 1792, max= 2304, per=4.21%, avg=1946.95, stdev=100.78, samples=19 00:40:23.824 iops : min= 448, max= 576, avg=486.74, stdev=25.19, samples=19 00:40:23.824 lat (msec) : 4=0.33%, 10=0.66%, 20=0.80%, 50=98.21% 00:40:23.824 cpu : usr=98.97%, sys=0.62%, ctx=12, majf=0, minf=1635 00:40:23.824 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:23.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.824 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.824 filename2: (groupid=0, jobs=1): err= 0: pid=2694895: Wed Jul 10 23:44:31 2024 00:40:23.824 read: IOPS=483, BW=1934KiB/s (1981kB/s)(18.9MiB/10013msec) 00:40:23.824 slat (usec): min=8, max=107, avg=44.97, stdev=26.17 00:40:23.824 clat (usec): min=12973, max=76807, avg=32619.65, stdev=3510.17 00:40:23.824 lat (usec): min=12991, max=76817, avg=32664.62, stdev=3513.05 00:40:23.824 clat percentiles (usec): 00:40:23.824 | 1.00th=[21890], 5.00th=[28705], 10.00th=[32113], 20.00th=[32375], 00:40:23.824 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:40:23.824 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:23.824 | 99.00th=[40109], 99.50th=[47973], 99.90th=[77071], 99.95th=[77071], 00:40:23.824 | 99.99th=[77071] 00:40:23.824 bw ( KiB/s): min= 1664, max= 2080, per=4.18%, avg=1930.40, stdev=80.80, samples=20 00:40:23.824 iops : min= 416, max= 520, avg=482.60, stdev=20.20, samples=20 00:40:23.824 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:40:23.824 cpu : usr=98.83%, sys=0.76%, ctx=13, majf=0, minf=1635 00:40:23.824 IO depths : 1=5.7%, 2=11.4%, 4=23.5%, 8=52.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:40:23.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.824 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.824 filename2: (groupid=0, jobs=1): err= 0: pid=2694896: Wed Jul 10 23:44:31 2024 00:40:23.824 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10011msec) 00:40:23.824 slat (usec): min=8, max=193, avg=34.32, stdev=26.13 00:40:23.824 clat (usec): min=13018, max=75311, avg=31894.54, stdev=4904.33 00:40:23.824 lat (usec): min=13049, max=75350, avg=31928.87, stdev=4909.43 00:40:23.824 clat percentiles (usec): 00:40:23.824 | 1.00th=[20841], 5.00th=[22152], 10.00th=[24773], 20.00th=[31589], 00:40:23.824 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:40:23.824 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[36963], 
00:40:23.824 | 99.00th=[47449], 99.50th=[48497], 99.90th=[74974], 99.95th=[74974], 00:40:23.824 | 99.99th=[74974] 00:40:23.824 bw ( KiB/s): min= 1667, max= 2288, per=4.30%, avg=1985.75, stdev=133.58, samples=20 00:40:23.824 iops : min= 416, max= 572, avg=496.40, stdev=33.49, samples=20 00:40:23.824 lat (msec) : 20=0.44%, 50=99.24%, 100=0.32% 00:40:23.824 cpu : usr=98.59%, sys=0.99%, ctx=15, majf=0, minf=1635 00:40:23.824 IO depths : 1=1.3%, 2=4.9%, 4=15.6%, 8=65.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:40:23.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 complete : 0=0.0%, 4=92.0%, 8=3.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.824 issued rwts: total=4978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.824 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:23.824 00:40:23.824 Run status group 0 (all jobs): 00:40:23.824 READ: bw=45.1MiB/s (47.3MB/s), 1908KiB/s-2023KiB/s (1954kB/s-2072kB/s), io=453MiB (475MB), run=10005-10032msec 00:40:24.082 ----------------------------------------------------- 00:40:24.082 Suppressions used: 00:40:24.082 count bytes template 00:40:24.082 45 402 /usr/src/fio/parse.c 00:40:24.082 1 8 libtcmalloc_minimal.so 00:40:24.082 1 904 libcrypto.so 00:40:24.082 ----------------------------------------------------- 00:40:24.082 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:24.082 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.083 bdev_null0 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.083 23:44:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.083 [2024-07-10 23:44:33.143812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:24.083 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.341 bdev_null1 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.341 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:24.342 23:44:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:24.342 { 00:40:24.342 "params": { 00:40:24.342 "name": "Nvme$subsystem", 00:40:24.342 "trtype": "$TEST_TRANSPORT", 00:40:24.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.342 "adrfam": "ipv4", 00:40:24.342 "trsvcid": "$NVMF_PORT", 00:40:24.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.342 "hdgst": ${hdgst:-false}, 00:40:24.342 "ddgst": ${ddgst:-false} 00:40:24.342 }, 00:40:24.342 "method": "bdev_nvme_attach_controller" 00:40:24.342 } 00:40:24.342 EOF 00:40:24.342 )") 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:24.342 { 00:40:24.342 "params": { 00:40:24.342 "name": "Nvme$subsystem", 00:40:24.342 "trtype": "$TEST_TRANSPORT", 00:40:24.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.342 "adrfam": "ipv4", 00:40:24.342 "trsvcid": "$NVMF_PORT", 00:40:24.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.342 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:40:24.342 "hdgst": ${hdgst:-false}, 00:40:24.342 "ddgst": ${ddgst:-false} 00:40:24.342 }, 00:40:24.342 "method": "bdev_nvme_attach_controller" 00:40:24.342 } 00:40:24.342 EOF 00:40:24.342 )") 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:24.342 "params": { 00:40:24.342 "name": "Nvme0", 00:40:24.342 "trtype": "tcp", 00:40:24.342 "traddr": "10.0.0.2", 00:40:24.342 "adrfam": "ipv4", 00:40:24.342 "trsvcid": "4420", 00:40:24.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:24.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:24.342 "hdgst": false, 00:40:24.342 "ddgst": false 00:40:24.342 }, 00:40:24.342 "method": "bdev_nvme_attach_controller" 00:40:24.342 },{ 00:40:24.342 "params": { 00:40:24.342 "name": "Nvme1", 00:40:24.342 "trtype": "tcp", 00:40:24.342 "traddr": "10.0.0.2", 00:40:24.342 "adrfam": "ipv4", 00:40:24.342 "trsvcid": "4420", 00:40:24.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:24.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:24.342 "hdgst": false, 00:40:24.342 "ddgst": false 00:40:24.342 }, 00:40:24.342 "method": "bdev_nvme_attach_controller" 00:40:24.342 }' 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:24.342 23:44:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.600 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:24.600 ... 00:40:24.600 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:24.600 ... 
00:40:24.600 fio-3.35 00:40:24.600 Starting 4 threads 00:40:24.600 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.149 00:40:31.149 filename0: (groupid=0, jobs=1): err= 0: pid=2696980: Wed Jul 10 23:44:39 2024 00:40:31.149 read: IOPS=2219, BW=17.3MiB/s (18.2MB/s)(86.7MiB/5001msec) 00:40:31.149 slat (nsec): min=7293, max=33607, avg=11758.77, stdev=4052.06 00:40:31.149 clat (usec): min=706, max=6560, avg=3568.83, stdev=641.78 00:40:31.149 lat (usec): min=714, max=6576, avg=3580.59, stdev=641.45 00:40:31.149 clat percentiles (usec): 00:40:31.149 | 1.00th=[ 2343], 5.00th=[ 2737], 10.00th=[ 2966], 20.00th=[ 3163], 00:40:31.149 | 30.00th=[ 3261], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3556], 00:40:31.149 | 70.00th=[ 3621], 80.00th=[ 3785], 90.00th=[ 4490], 95.00th=[ 4948], 00:40:31.149 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6325], 00:40:31.149 | 99.99th=[ 6521] 00:40:31.149 bw ( KiB/s): min=17216, max=18272, per=24.85%, avg=17829.33, stdev=334.66, samples=9 00:40:31.149 iops : min= 2152, max= 2284, avg=2228.67, stdev=41.83, samples=9 00:40:31.149 lat (usec) : 750=0.03%, 1000=0.05% 00:40:31.149 lat (msec) : 2=0.25%, 4=83.73%, 10=15.94% 00:40:31.149 cpu : usr=96.34%, sys=3.26%, ctx=8, majf=0, minf=1634 00:40:31.149 IO depths : 1=0.4%, 2=5.0%, 4=66.7%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 issued rwts: total=11099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.149 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:31.149 filename0: (groupid=0, jobs=1): err= 0: pid=2696981: Wed Jul 10 23:44:39 2024 00:40:31.149 read: IOPS=2329, BW=18.2MiB/s (19.1MB/s)(91.0MiB/5002msec) 00:40:31.149 slat (nsec): min=7186, max=35743, avg=11498.47, stdev=3992.29 00:40:31.149 clat (usec): min=775, max=10588, avg=3399.05, stdev=653.89 00:40:31.149 lat (usec): min=789, max=10617, avg=3410.54, stdev=653.88 00:40:31.149 clat percentiles (usec): 00:40:31.149 | 1.00th=[ 2057], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2966], 00:40:31.149 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3392], 60.00th=[ 3490], 00:40:31.149 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 4080], 95.00th=[ 4752], 00:40:31.149 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 6521], 99.95th=[10290], 00:40:31.149 | 99.99th=[10290] 00:40:31.149 bw ( KiB/s): min=17920, max=20272, per=25.93%, avg=18604.44, stdev=792.76, samples=9 00:40:31.149 iops : min= 2240, max= 2534, avg=2325.56, stdev=99.09, samples=9 00:40:31.149 lat (usec) : 1000=0.01% 00:40:31.149 lat (msec) : 2=0.80%, 4=88.67%, 10=10.45%, 20=0.07% 00:40:31.149 cpu : usr=96.00%, sys=3.62%, ctx=15, majf=0, minf=1638 00:40:31.149 IO depths : 1=0.3%, 2=5.4%, 4=67.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 issued rwts: total=11652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.149 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:31.149 filename1: (groupid=0, jobs=1): err= 0: pid=2696982: Wed Jul 10 23:44:39 2024 00:40:31.149 read: IOPS=2188, BW=17.1MiB/s (17.9MB/s)(85.5MiB/5002msec) 00:40:31.149 slat (usec): min=7, max=133, avg=11.82, stdev= 4.24 00:40:31.149 clat (usec): min=659, max=8397, avg=3620.81, stdev=653.96 00:40:31.149 lat (usec): min=673, max=8427, avg=3632.63, stdev=653.65 
00:40:31.149 clat percentiles (usec): 00:40:31.149 | 1.00th=[ 2376], 5.00th=[ 2802], 10.00th=[ 2999], 20.00th=[ 3195], 00:40:31.149 | 30.00th=[ 3326], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3556], 00:40:31.149 | 70.00th=[ 3654], 80.00th=[ 3949], 90.00th=[ 4555], 95.00th=[ 5014], 00:40:31.149 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6587], 99.95th=[ 8094], 00:40:31.149 | 99.99th=[ 8356] 00:40:31.149 bw ( KiB/s): min=16448, max=17904, per=24.32%, avg=17445.33, stdev=532.59, samples=9 00:40:31.149 iops : min= 2056, max= 2238, avg=2180.67, stdev=66.57, samples=9 00:40:31.149 lat (usec) : 750=0.02%, 1000=0.01% 00:40:31.149 lat (msec) : 2=0.26%, 4=80.86%, 10=18.86% 00:40:31.149 cpu : usr=95.94%, sys=3.68%, ctx=12, majf=0, minf=1635 00:40:31.149 IO depths : 1=0.3%, 2=4.2%, 4=67.1%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 issued rwts: total=10945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.149 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:31.149 filename1: (groupid=0, jobs=1): err= 0: pid=2696983: Wed Jul 10 23:44:39 2024 00:40:31.149 read: IOPS=2230, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5002msec) 00:40:31.149 slat (usec): min=4, max=123, avg=11.68, stdev= 4.11 00:40:31.149 clat (usec): min=1147, max=6878, avg=3552.29, stdev=641.28 00:40:31.149 lat (usec): min=1162, max=6896, avg=3563.97, stdev=641.21 00:40:31.149 clat percentiles (usec): 00:40:31.149 | 1.00th=[ 2245], 5.00th=[ 2737], 10.00th=[ 2933], 20.00th=[ 3163], 00:40:31.149 | 30.00th=[ 3261], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3556], 00:40:31.149 | 70.00th=[ 3589], 80.00th=[ 3785], 90.00th=[ 4424], 95.00th=[ 5014], 00:40:31.149 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 6587], 99.95th=[ 6587], 00:40:31.149 | 99.99th=[ 6849] 00:40:31.149 bw ( KiB/s): min=17680, max=18272, per=24.98%, avg=17923.56, stdev=195.27, samples=9 00:40:31.149 iops : min= 2210, max= 2284, avg=2240.44, stdev=24.41, samples=9 00:40:31.149 lat (msec) : 2=0.51%, 4=84.32%, 10=15.17% 00:40:31.149 cpu : usr=95.78%, sys=3.82%, ctx=7, majf=0, minf=1634 00:40:31.149 IO depths : 1=0.4%, 2=3.7%, 4=67.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.149 issued rwts: total=11157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.149 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:31.149 00:40:31.149 Run status group 0 (all jobs): 00:40:31.149 READ: bw=70.1MiB/s (73.5MB/s), 17.1MiB/s-18.2MiB/s (17.9MB/s-19.1MB/s), io=350MiB (367MB), run=5001-5002msec 00:40:31.715 ----------------------------------------------------- 00:40:31.715 Suppressions used: 00:40:31.715 count bytes template 00:40:31.715 6 52 /usr/src/fio/parse.c 00:40:31.715 1 8 libtcmalloc_minimal.so 00:40:31.715 1 904 libcrypto.so 00:40:31.715 ----------------------------------------------------- 00:40:31.715 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@36 -- # local sub_id=0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 00:40:31.974 real 0m28.392s 00:40:31.974 user 4m56.181s 00:40:31.974 sys 0m4.927s 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 ************************************ 00:40:31.974 END TEST fio_dif_rand_params 00:40:31.974 ************************************ 00:40:31.974 23:44:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:31.974 23:44:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:31.974 23:44:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:31.974 23:44:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 ************************************ 00:40:31.974 START TEST fio_dif_digest 00:40:31.974 ************************************ 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
bs=128k,128k,128k 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 bdev_null0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:31.974 [2024-07-10 23:44:40.936885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:31.974 { 00:40:31.974 "params": { 00:40:31.974 "name": "Nvme$subsystem", 00:40:31.974 "trtype": "$TEST_TRANSPORT", 
00:40:31.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.974 "adrfam": "ipv4", 00:40:31.974 "trsvcid": "$NVMF_PORT", 00:40:31.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.974 "hdgst": ${hdgst:-false}, 00:40:31.974 "ddgst": ${ddgst:-false} 00:40:31.974 }, 00:40:31.974 "method": "bdev_nvme_attach_controller" 00:40:31.974 } 00:40:31.974 EOF 00:40:31.974 )") 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:40:31.974 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:31.975 "params": { 00:40:31.975 "name": "Nvme0", 00:40:31.975 "trtype": "tcp", 00:40:31.975 "traddr": "10.0.0.2", 00:40:31.975 "adrfam": "ipv4", 00:40:31.975 "trsvcid": "4420", 00:40:31.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:31.975 "hdgst": true, 00:40:31.975 "ddgst": true 00:40:31.975 }, 00:40:31.975 "method": "bdev_nvme_attach_controller" 00:40:31.975 }' 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:31.975 23:44:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:32.539 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:32.539 ... 00:40:32.539 fio-3.35 00:40:32.539 Starting 3 threads 00:40:32.539 EAL: No free 2048 kB hugepages reported on node 1 00:40:44.731 00:40:44.731 filename0: (groupid=0, jobs=1): err= 0: pid=2698256: Wed Jul 10 23:44:52 2024 00:40:44.731 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(304MiB/10049msec) 00:40:44.731 slat (nsec): min=7439, max=28663, avg=13556.25, stdev=1899.01 00:40:44.731 clat (usec): min=7001, max=52182, avg=12371.86, stdev=1489.39 00:40:44.731 lat (usec): min=7014, max=52194, avg=12385.42, stdev=1489.44 00:40:44.731 clat percentiles (usec): 00:40:44.731 | 1.00th=[ 8979], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:40:44.731 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:40:44.731 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:40:44.731 | 99.00th=[14484], 99.50th=[14615], 99.90th=[17957], 99.95th=[50594], 00:40:44.731 | 99.99th=[52167] 00:40:44.731 bw ( KiB/s): min=29440, max=32000, per=34.70%, avg=31078.40, stdev=634.72, samples=20 00:40:44.731 iops : min= 230, max= 250, avg=242.80, stdev= 4.96, samples=20 00:40:44.731 lat (msec) : 10=2.02%, 20=97.90%, 100=0.08% 00:40:44.731 cpu : usr=94.73%, sys=4.92%, ctx=29, majf=0, minf=1634 00:40:44.731 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.731 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:44.731 filename0: (groupid=0, jobs=1): err= 0: pid=2698257: Wed Jul 10 23:44:52 2024 00:40:44.731 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(287MiB/10046msec) 00:40:44.731 slat (nsec): min=7733, max=32266, avg=13662.27, stdev=2057.50 00:40:44.731 clat (usec): min=6941, max=45809, avg=13059.46, stdev=1265.48 00:40:44.731 lat (usec): min=6953, max=45822, avg=13073.12, stdev=1265.57 00:40:44.731 clat percentiles (usec): 00:40:44.731 | 1.00th=[ 9110], 5.00th=[11338], 10.00th=[11863], 20.00th=[12256], 
00:40:44.731 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:40:44.731 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:40:44.731 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16188], 99.95th=[16319], 00:40:44.731 | 99.99th=[45876] 00:40:44.731 bw ( KiB/s): min=28672, max=30720, per=32.83%, avg=29404.45, stdev=591.19, samples=20 00:40:44.731 iops : min= 224, max= 240, avg=229.70, stdev= 4.65, samples=20 00:40:44.731 lat (msec) : 10=1.52%, 20=98.43%, 50=0.04% 00:40:44.731 cpu : usr=94.68%, sys=4.96%, ctx=23, majf=0, minf=1634 00:40:44.731 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.731 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:44.731 filename0: (groupid=0, jobs=1): err= 0: pid=2698258: Wed Jul 10 23:44:52 2024 00:40:44.731 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(288MiB/10046msec) 00:40:44.731 slat (nsec): min=7760, max=37566, avg=14061.73, stdev=2011.42 00:40:44.731 clat (usec): min=9748, max=55940, avg=13044.20, stdev=2933.13 00:40:44.731 lat (usec): min=9761, max=55952, avg=13058.27, stdev=2933.10 00:40:44.731 clat percentiles (usec): 00:40:44.731 | 1.00th=[10814], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:40:44.731 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:40:44.731 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:40:44.731 | 99.00th=[15270], 99.50th=[15926], 99.90th=[55313], 99.95th=[55837], 00:40:44.731 | 99.99th=[55837] 00:40:44.731 bw ( KiB/s): min=27136, max=30464, per=32.90%, avg=29465.60, stdev=989.39, samples=20 00:40:44.731 iops : min= 212, max= 238, avg=230.20, stdev= 7.73, samples=20 00:40:44.731 lat (msec) : 10=0.13%, 20=99.39%, 50=0.04%, 100=0.43% 00:40:44.731 cpu : usr=94.94%, sys=4.70%, ctx=23, majf=0, minf=1635 00:40:44.731 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.731 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:44.731 00:40:44.731 Run status group 0 (all jobs): 00:40:44.731 READ: bw=87.5MiB/s (91.7MB/s), 28.6MiB/s-30.2MiB/s (30.0MB/s-31.7MB/s), io=879MiB (922MB), run=10046-10049msec 00:40:44.731 ----------------------------------------------------- 00:40:44.731 Suppressions used: 00:40:44.731 count bytes template 00:40:44.731 5 44 /usr/src/fio/parse.c 00:40:44.731 1 8 libtcmalloc_minimal.so 00:40:44.731 1 904 libcrypto.so 00:40:44.731 ----------------------------------------------------- 00:40:44.731 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
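The nvmf_delete_subsystem call above, together with the bdev_null_delete that follows, unwinds the setup performed by create_subsystems at the start of this test. Outside the harness the same lifecycle can be driven directly with SPDK's RPC client; rpc_cmd in the trace is a thin wrapper around it. A sketch under that assumption, reusing the values visible in this run (DIF-type-3 null bdev, TCP listener on 10.0.0.2:4420):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Setup, as performed by create_subsystems earlier in the log.
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Teardown, matching the rpc_cmd calls in the trace above.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_null_delete bdev_null0
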
00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:44.731 00:40:44.731 real 0m12.358s 00:40:44.731 user 0m36.527s 00:40:44.731 sys 0m1.927s 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:44.731 23:44:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.731 ************************************ 00:40:44.731 END TEST fio_dif_digest 00:40:44.731 ************************************ 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:44.731 23:44:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:44.731 23:44:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:44.731 rmmod nvme_tcp 00:40:44.731 rmmod nvme_fabrics 00:40:44.731 rmmod nvme_keyring 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2688781 ']' 00:40:44.731 23:44:53 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2688781 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2688781 ']' 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2688781 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688781 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688781' 00:40:44.731 killing process with pid 2688781 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2688781 00:40:44.731 23:44:53 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2688781 00:40:45.661 23:44:54 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:45.661 23:44:54 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:48.192 Waiting for block devices as requested 00:40:48.192 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:48.192 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 
00:40:48.192 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:48.451 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:48.451 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:48.451 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:48.451 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:48.709 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:48.709 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:48.709 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:48.709 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:48.968 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:48.968 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:48.968 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:48.968 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:48.968 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:49.225 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:49.225 23:44:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:49.225 23:44:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:49.225 23:44:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:49.225 23:44:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:49.225 23:44:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:49.225 23:44:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:49.225 23:44:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.791 23:45:00 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:51.791 00:40:51.791 real 1m21.682s 00:40:51.791 user 7m25.824s 00:40:51.791 sys 0m18.844s 00:40:51.791 23:45:00 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:51.791 23:45:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:51.791 ************************************ 00:40:51.791 END TEST nvmf_dif 00:40:51.791 ************************************ 00:40:51.791 23:45:00 -- common/autotest_common.sh@1142 -- # return 0 00:40:51.791 23:45:00 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:51.791 23:45:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:51.791 23:45:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:51.791 23:45:00 -- common/autotest_common.sh@10 -- # set +x 00:40:51.791 ************************************ 00:40:51.791 START TEST nvmf_abort_qd_sizes 00:40:51.791 ************************************ 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:51.791 * Looking for test storage... 
00:40:51.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:40:51.791 23:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:57.069 23:45:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:57.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:57.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:57.069 Found net devices under 0000:86:00.0: cvl_0_0 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:57.069 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:57.070 Found net devices under 0000:86:00.1: cvl_0_1 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:57.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:57.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:40:57.070 00:40:57.070 --- 10.0.0.2 ping statistics --- 00:40:57.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.070 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:57.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:57.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:40:57.070 00:40:57.070 --- 10.0.0.1 ping statistics --- 00:40:57.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.070 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:57.070 23:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:58.973 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:58.973 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:58.973 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:58.973 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:58.973 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:59.231 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:00.168 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2706641 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2706641 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2706641 ']' 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
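The nvmf_tcp_init trace above builds a self-contained loopback topology out of the two e810 ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic between them still crosses the physical link. A minimal sketch of the same setup, assuming the harness has already renamed the two ports to cvl_0_0 and cvl_0_1:

TARGET_NS=cvl_0_0_ns_spdk

# flush stale addresses, then isolate the target port in its own netns
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"

# initiator side (default netns) and target side (inside the netns)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# let NVMe/TCP (port 4420) in past any default-drop INPUT policy
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

Every nvmf_tgt instance in the rest of the log is then launched through ip netns exec "$TARGET_NS" (NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above), so the target listens on 10.0.0.2 while initiator-side tools connect from the default namespace.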
00:41:00.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:00.168 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:00.168 [2024-07-10 23:45:09.195558] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:41:00.168 [2024-07-10 23:45:09.195647] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:00.427 EAL: No free 2048 kB hugepages reported on node 1 00:41:00.427 [2024-07-10 23:45:09.305310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:00.684 [2024-07-10 23:45:09.526990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:00.684 [2024-07-10 23:45:09.527034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:00.684 [2024-07-10 23:45:09.527046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:00.684 [2024-07-10 23:45:09.527054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:00.684 [2024-07-10 23:45:09.527064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:00.684 [2024-07-10 23:45:09.527134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:00.684 [2024-07-10 23:45:09.527218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:00.684 [2024-07-10 23:45:09.527242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.684 [2024-07-10 23:45:09.527251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:00.941 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:00.941 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:41:00.941 23:45:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:00.941 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:00.941 23:45:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:41:01.199 23:45:10 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:01.199 23:45:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:01.199 ************************************ 00:41:01.199 START TEST spdk_target_abort 00:41:01.199 ************************************ 00:41:01.199 23:45:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:41:01.199 23:45:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:01.199 23:45:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:41:01.199 23:45:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:01.199 23:45:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:04.477 spdk_targetn1 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:04.477 [2024-07-10 23:45:12.939402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:04.477 [2024-07-10 23:45:12.990895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:04.477 23:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:04.477 EAL: No free 2048 kB hugepages 
reported on node 1 00:41:07.759 Initializing NVMe Controllers 00:41:07.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:07.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:07.759 Initialization complete. Launching workers. 00:41:07.759 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13541, failed: 0 00:41:07.759 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1311, failed to submit 12230 00:41:07.759 success 764, unsuccess 547, failed 0 00:41:07.759 23:45:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:07.759 23:45:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:07.759 EAL: No free 2048 kB hugepages reported on node 1 00:41:11.042 Initializing NVMe Controllers 00:41:11.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:11.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:11.042 Initialization complete. Launching workers. 00:41:11.042 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8587, failed: 0 00:41:11.042 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7366 00:41:11.042 success 322, unsuccess 899, failed 0 00:41:11.042 23:45:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:11.042 23:45:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:11.042 EAL: No free 2048 kB hugepages reported on node 1 00:41:14.323 Initializing NVMe Controllers 00:41:14.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:14.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:14.323 Initialization complete. Launching workers. 
00:41:14.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33465, failed: 0 00:41:14.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2734, failed to submit 30731 00:41:14.323 success 551, unsuccess 2183, failed 0 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:14.323 23:45:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2706641 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2706641 ']' 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2706641 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2706641 00:41:15.256 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:15.257 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:15.257 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2706641' 00:41:15.257 killing process with pid 2706641 00:41:15.257 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2706641 00:41:15.257 23:45:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2706641 00:41:16.633 00:41:16.633 real 0m15.259s 00:41:16.633 user 0m59.164s 00:41:16.633 sys 0m2.275s 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:16.633 ************************************ 00:41:16.633 END TEST spdk_target_abort 00:41:16.633 ************************************ 00:41:16.633 23:45:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:41:16.633 23:45:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:16.633 23:45:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:16.633 23:45:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:16.633 23:45:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:16.633 
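Condensing the spdk_target_abort trace above: the local NVMe drive at 0000:5e:00.0 is attached to the userspace target as bdev spdk_targetn1, exported over NVMe/TCP, and the abort example then drives 4 KiB mixed I/O (50% reads, per -w rw -M 50) against it at queue depths 4, 24 and 64 while continuously submitting abort commands for the outstanding I/O. A hedged sketch of the same sequence; rpc.py is the stock SPDK RPC client that the test's rpc_cmd wrapper drives:

rpc=scripts/rpc.py

# take the PCIe NVMe device into the SPDK target; its namespace appears as spdk_targetn1
$rpc bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target

# export it over NVMe/TCP on the netns-side address
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# run the abort workload at each queue depth, as the test loop above does
for qd in 4 24 64; do
	build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
		-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

In each per-run summary, "success" counts aborts the controller completed successfully, "unsuccess" counts aborts it reported as unsuccessful (typically because the target command had already finished), and a non-zero "failed" would fail the test; in these runs the share of successful aborts shrinks as queue depth grows.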
************************************ 00:41:16.633 START TEST kernel_target_abort 00:41:16.633 ************************************ 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:16.633 23:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:19.203 Waiting for block devices as requested 00:41:19.203 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:41:19.203 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:19.203 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:19.203 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:19.203 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:19.203 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:19.463 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:19.463 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:19.463 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:19.463 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:19.722 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:19.722 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:19.722 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:19.722 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:19.980 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:19.980 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:19.980 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:20.918 No valid GPT data, bailing 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:20.918 23:45:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:41:20.918 00:41:20.918 Discovery Log Number of Records 2, Generation counter 2 00:41:20.918 =====Discovery Log Entry 0====== 00:41:20.918 trtype: tcp 00:41:20.918 adrfam: ipv4 00:41:20.918 subtype: current discovery subsystem 00:41:20.918 treq: not specified, sq flow control disable supported 00:41:20.918 portid: 1 00:41:20.918 trsvcid: 4420 00:41:20.918 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:20.918 traddr: 10.0.0.1 00:41:20.918 eflags: none 00:41:20.918 sectype: none 00:41:20.918 =====Discovery Log Entry 1====== 00:41:20.918 trtype: tcp 00:41:20.918 adrfam: ipv4 00:41:20.918 subtype: nvme subsystem 00:41:20.918 treq: not specified, sq flow control disable supported 00:41:20.918 portid: 1 00:41:20.918 trsvcid: 4420 00:41:20.918 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:20.918 traddr: 10.0.0.1 00:41:20.918 eflags: none 00:41:20.918 sectype: none 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:20.918 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:20.919 23:45:29 
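The kernel-target variant needs no SPDK process at all: configure_kernel_target, traced above, exports /dev/nvme0n1 through the in-kernel nvmet driver purely via configfs, and the nvme discover output confirms that both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are listening on 10.0.0.1:4420. The echo redirection targets are invisible in the xtrace, so the attribute paths in this sketch are an assumption based on the standard nvmet configfs layout:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet    # nvmet-tcp is pulled in when the port's trtype is set
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

# serial string is shown in the trace; the attribute file name is assumed
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_serial"
echo 1 > "$subsys/attr_allow_any_host"

# back the namespace with the local drive and enable it
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener on the initiator-side address
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"

# publishing the subsystem on the port is just a symlink
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The abort runs that follow then target 10.0.0.1 instead of 10.0.0.2, and clean_kernel_target at the end of the test undoes this in reverse: remove the symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet, exactly as traced further below.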
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:20.919 23:45:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:20.919 EAL: No free 2048 kB hugepages reported on node 1 00:41:24.202 Initializing NVMe Controllers 00:41:24.202 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:24.202 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:24.202 Initialization complete. Launching workers. 00:41:24.202 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71540, failed: 0 00:41:24.202 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 71540, failed to submit 0 00:41:24.202 success 0, unsuccess 71540, failed 0 00:41:24.202 23:45:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:24.202 23:45:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:24.202 EAL: No free 2048 kB hugepages reported on node 1 00:41:27.485 Initializing NVMe Controllers 00:41:27.485 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:27.485 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:27.485 Initialization complete. Launching workers. 
00:41:27.485 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 116867, failed: 0 00:41:27.485 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29374, failed to submit 87493 00:41:27.485 success 0, unsuccess 29374, failed 0 00:41:27.485 23:45:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:27.486 23:45:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:27.486 EAL: No free 2048 kB hugepages reported on node 1 00:41:30.769 Initializing NVMe Controllers 00:41:30.769 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:30.769 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:30.769 Initialization complete. Launching workers. 00:41:30.769 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 111926, failed: 0 00:41:30.769 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27962, failed to submit 83964 00:41:30.769 success 0, unsuccess 27962, failed 0 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:30.769 23:45:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:32.672 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:41:32.672 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:32.672 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:33.239 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:41:33.500 00:41:33.500 real 0m17.012s 00:41:33.500 user 0m8.323s 00:41:33.500 sys 0m5.073s 00:41:33.500 23:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:33.500 23:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.500 ************************************ 00:41:33.500 END TEST kernel_target_abort 00:41:33.500 ************************************ 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:33.500 rmmod nvme_tcp 00:41:33.500 rmmod nvme_fabrics 00:41:33.500 rmmod nvme_keyring 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2706641 ']' 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2706641 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2706641 ']' 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2706641 00:41:33.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2706641) - No such process 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2706641 is not found' 00:41:33.500 Process with pid 2706641 is not found 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:33.500 23:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:36.032 Waiting for block devices as requested 00:41:36.032 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:41:36.032 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:36.032 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:36.032 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:36.290 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:36.290 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:36.290 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:36.290 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:36.548 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:36.548 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:36.548 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:36.548 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:36.806 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:36.806 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:41:36.806 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:36.806 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:37.063 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:37.063 23:45:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:39.645 23:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:39.645 00:41:39.645 real 0m47.793s 00:41:39.645 user 1m11.216s 00:41:39.645 sys 0m15.020s 00:41:39.645 23:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:39.645 23:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:39.645 ************************************ 00:41:39.645 END TEST nvmf_abort_qd_sizes 00:41:39.645 ************************************ 00:41:39.645 23:45:48 -- common/autotest_common.sh@1142 -- # return 0 00:41:39.645 23:45:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:39.645 23:45:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:39.645 23:45:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:39.645 23:45:48 -- common/autotest_common.sh@10 -- # set +x 00:41:39.645 ************************************ 00:41:39.645 START TEST keyring_file 00:41:39.645 ************************************ 00:41:39.645 23:45:48 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:39.645 * Looking for test storage... 
00:41:39.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:39.645 23:45:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:39.645 23:45:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:39.645 23:45:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:39.645 23:45:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:39.645 23:45:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:39.645 23:45:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.645 23:45:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.645 23:45:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.645 23:45:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:39.645 23:45:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:39.645 23:45:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:39.645 23:45:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:39.645 23:45:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:39.645 23:45:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OE9pwCQCGk 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:39.646 23:45:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OE9pwCQCGk 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OE9pwCQCGk 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.OE9pwCQCGk 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S6MPg9wynV 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:39.646 23:45:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S6MPg9wynV 00:41:39.646 23:45:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S6MPg9wynV 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.S6MPg9wynV 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=2715754 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:39.646 23:45:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2715754 00:41:39.646 23:45:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2715754 ']' 00:41:39.646 23:45:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:39.646 23:45:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:39.646 23:45:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:39.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:39.646 23:45:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:39.646 23:45:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:39.646 [2024-07-10 23:45:48.418777] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
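prep_key above wraps format_interchange_psk: the inline 'python -' step encodes the raw hex key in the NVMe TLS PSK interchange format, and the result is written to the mktemp path with mode 0600 so the keyring will accept it. A minimal sketch of that encoding, assuming digest 0 maps to the '00' (no hash) indicator and that the checksum is the little-endian CRC-32 of the key, per the interchange format:

# hypothetical stand-in for: format_key NVMeTLSkey-1 <hex-key> 0
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
psk = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(psk).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
EOF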
00:41:39.646 [2024-07-10 23:45:48.418878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715754 ]
00:41:39.646 EAL: No free 2048 kB hugepages reported on node 1
00:41:39.646 [2024-07-10 23:45:48.521409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:39.904 [2024-07-10 23:45:48.735552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@862 -- # return 0
00:41:40.839 23:45:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:41:40.839 [2024-07-10 23:45:49.644390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:40.839 null0
00:41:40.839 [2024-07-10 23:45:49.676415] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:41:40.839 [2024-07-10 23:45:49.676765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:41:40.839 [2024-07-10 23:45:49.684461] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:40.839 23:45:49 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:41:40.839 [2024-07-10 23:45:49.696467] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:41:40.839 request:
00:41:40.839 {
00:41:40.839 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:41:40.839 "secure_channel": false,
00:41:40.839 "listen_address": {
00:41:40.839 "trtype": "tcp",
00:41:40.839 "traddr": "127.0.0.1",
00:41:40.839 "trsvcid": "4420"
00:41:40.839 },
00:41:40.839 "method": "nvmf_subsystem_add_listener",
00:41:40.839 "req_id": 1
00:41:40.839 }
00:41:40.839 Got JSON-RPC error response
00:41:40.839 response:
00:41:40.839 {
00:41:40.839 "code": -32602,
00:41:40.839 "message": "Invalid parameters"
00:41:40.839 }
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@651 -- # es=1
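That block is the suite's first negative test: the target already listens on 127.0.0.1:4420, so the second nvmf_subsystem_add_listener must fail, and the NOT wrapper from autotest_common.sh turns the expected failure into a pass (hence es=1 above and the (( es > 128 )) check that follows). A simplified stand-in for the wrapper, ignoring the argument validation the real helper performs:

# succeed only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0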
00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:40.839 23:45:49 keyring_file -- keyring/file.sh@46 -- # bperfpid=2715987 00:41:40.839 23:45:49 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2715987 /var/tmp/bperf.sock 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2715987 ']' 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:40.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:40.839 23:45:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:40.839 23:45:49 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:40.839 [2024-07-10 23:45:49.773143] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:41:40.839 [2024-07-10 23:45:49.773240] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715987 ] 00:41:40.839 EAL: No free 2048 kB hugepages reported on node 1 00:41:40.839 [2024-07-10 23:45:49.874331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.097 [2024-07-10 23:45:50.099645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.664 23:45:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:41.664 23:45:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:41:41.664 23:45:50 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:41.664 23:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:41.664 23:45:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.S6MPg9wynV 00:41:41.664 23:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.S6MPg9wynV 00:41:41.922 23:45:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:41:41.922 23:45:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:41:41.922 23:45:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:41.922 23:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:41.922 23:45:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:42.180 23:45:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.OE9pwCQCGk == \/\t\m\p\/\t\m\p\.\O\E\9\p\w\C\Q\C\G\k ]] 00:41:42.180 23:45:51 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:41:42.180 23:45:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:42.180 23:45:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.S6MPg9wynV == \/\t\m\p\/\t\m\p\.\S\6\M\P\g\9\w\y\n\V ]] 00:41:42.180 23:45:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:42.180 23:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:42.437 23:45:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:41:42.437 23:45:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:41:42.437 23:45:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:42.437 23:45:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:42.437 23:45:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:42.437 23:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:42.437 23:45:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:42.697 23:45:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:42.697 23:45:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:42.697 23:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:42.697 [2024-07-10 23:45:51.750938] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:42.994 nvme0n1 00:41:42.994 23:45:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:41:42.994 23:45:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:42.994 23:45:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:42.994 23:45:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:42.994 23:45:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:42.994 23:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:42.994 23:45:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:41:42.994 23:45:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:41:42.994 23:45:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:42.994 23:45:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:42.994 23:45:52 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:42.994 23:45:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:42.994 23:45:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:43.253 23:45:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:41:43.253 23:45:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:43.253 Running I/O for 1 seconds... 00:41:44.631 00:41:44.631 Latency(us) 00:41:44.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:44.631 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:44.631 nvme0n1 : 1.01 12384.23 48.38 0.00 0.00 10303.08 5784.26 18578.03 00:41:44.631 =================================================================================================================== 00:41:44.631 Total : 12384.23 48.38 0.00 0.00 10303.08 5784.26 18578.03 00:41:44.631 0 00:41:44.631 23:45:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:44.631 23:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:44.631 23:45:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:41:44.631 23:45:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:44.632 23:45:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:41:44.632 23:45:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:44.632 23:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:44.891 23:45:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:44.891 23:45:53 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:44.891 23:45:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:44.891 23:45:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:44.891 23:45:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:44.891 23:45:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:44.891 23:45:53 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:44.891 23:45:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:44.891 23:45:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:44.891 23:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:45.150 [2024-07-10 23:45:54.023711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:45.150 [2024-07-10 23:45:54.023929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332280 (107): Transport endpoint is not connected 00:41:45.150 [2024-07-10 23:45:54.024912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332280 (9): Bad file descriptor 00:41:45.150 [2024-07-10 23:45:54.025910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:45.150 [2024-07-10 23:45:54.025929] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:45.150 [2024-07-10 23:45:54.025939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:45.150 request: 00:41:45.150 { 00:41:45.150 "name": "nvme0", 00:41:45.150 "trtype": "tcp", 00:41:45.150 "traddr": "127.0.0.1", 00:41:45.150 "adrfam": "ipv4", 00:41:45.150 "trsvcid": "4420", 00:41:45.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:45.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:45.150 "prchk_reftag": false, 00:41:45.150 "prchk_guard": false, 00:41:45.150 "hdgst": false, 00:41:45.150 "ddgst": false, 00:41:45.150 "psk": "key1", 00:41:45.150 "method": "bdev_nvme_attach_controller", 00:41:45.150 "req_id": 1 00:41:45.150 } 00:41:45.150 Got JSON-RPC error response 00:41:45.150 response: 00:41:45.150 { 00:41:45.150 "code": -5, 00:41:45.150 "message": "Input/output error" 00:41:45.150 } 00:41:45.150 23:45:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:45.150 23:45:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:45.150 23:45:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:45.150 23:45:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:45.150 23:45:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:41:45.150 23:45:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:45.150 23:45:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:45.150 23:45:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:45.150 23:45:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:45.150 23:45:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:45.409 23:45:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:41:45.409 23:45:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:41:45.409 23:45:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:45.409 
23:45:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:45.409 23:45:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:45.409 23:45:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:45.409 23:45:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:45.409 23:45:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:45.409 23:45:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:41:45.409 23:45:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:45.668 23:45:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:41:45.668 23:45:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:45.927 23:45:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:41:45.927 23:45:54 keyring_file -- keyring/file.sh@77 -- # jq length 00:41:45.927 23:45:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:45.927 23:45:54 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:41:45.927 23:45:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.OE9pwCQCGk 00:41:45.927 23:45:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:45.927 23:45:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:45.927 23:45:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:46.186 [2024-07-10 23:45:55.070372] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OE9pwCQCGk': 0100660 00:41:46.186 [2024-07-10 23:45:55.070406] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:46.186 request: 00:41:46.186 { 00:41:46.186 "name": "key0", 00:41:46.186 "path": "/tmp/tmp.OE9pwCQCGk", 00:41:46.186 "method": "keyring_file_add_key", 00:41:46.186 "req_id": 1 00:41:46.186 } 00:41:46.186 Got JSON-RPC error response 00:41:46.186 response: 00:41:46.186 { 00:41:46.186 "code": -1, 00:41:46.186 "message": "Operation not permitted" 00:41:46.186 } 00:41:46.186 23:45:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:46.186 23:45:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:46.186 23:45:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:46.186 23:45:55 keyring_file 
-- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:46.186 23:45:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.OE9pwCQCGk 00:41:46.186 23:45:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:46.186 23:45:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OE9pwCQCGk 00:41:46.444 23:45:55 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.OE9pwCQCGk 00:41:46.445 23:45:55 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:41:46.445 23:45:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:46.445 23:45:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:46.445 23:45:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:46.445 23:45:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:46.445 23:45:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:46.445 23:45:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:41:46.445 23:45:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:46.445 23:45:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:46.445 23:45:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:46.703 [2024-07-10 23:45:55.599834] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.OE9pwCQCGk': No such file or directory 00:41:46.703 [2024-07-10 23:45:55.599868] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:46.703 [2024-07-10 23:45:55.599893] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:46.703 [2024-07-10 23:45:55.599903] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:46.703 [2024-07-10 23:45:55.599913] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:46.703 request: 00:41:46.703 { 00:41:46.703 "name": "nvme0", 00:41:46.703 "trtype": "tcp", 00:41:46.703 "traddr": "127.0.0.1", 00:41:46.703 "adrfam": "ipv4", 00:41:46.703 
"trsvcid": "4420", 00:41:46.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:46.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:46.703 "prchk_reftag": false, 00:41:46.703 "prchk_guard": false, 00:41:46.703 "hdgst": false, 00:41:46.703 "ddgst": false, 00:41:46.703 "psk": "key0", 00:41:46.703 "method": "bdev_nvme_attach_controller", 00:41:46.703 "req_id": 1 00:41:46.703 } 00:41:46.703 Got JSON-RPC error response 00:41:46.703 response: 00:41:46.703 { 00:41:46.703 "code": -19, 00:41:46.703 "message": "No such device" 00:41:46.703 } 00:41:46.703 23:45:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:46.703 23:45:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:46.703 23:45:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:46.703 23:45:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:46.703 23:45:55 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:41:46.703 23:45:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:46.962 23:45:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XWR935z5rc 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:46.962 23:45:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:46.962 23:45:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:46.962 23:45:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:46.962 23:45:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:46.962 23:45:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:46.962 23:45:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XWR935z5rc 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XWR935z5rc 00:41:46.962 23:45:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.XWR935z5rc 00:41:46.962 23:45:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XWR935z5rc 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XWR935z5rc 00:41:46.962 23:45:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:46.962 23:45:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:47.221 nvme0n1 00:41:47.221 
23:45:56 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:41:47.221 23:45:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:47.221 23:45:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:47.221 23:45:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:47.221 23:45:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:47.221 23:45:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:47.480 23:45:56 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:41:47.480 23:45:56 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:41:47.480 23:45:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:47.738 23:45:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:41:47.739 23:45:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:47.739 23:45:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:41:47.739 23:45:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:47.739 23:45:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:47.996 23:45:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:41:47.996 23:45:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:47.996 23:45:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:48.254 23:45:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:41:48.255 23:45:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:48.255 23:45:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:41:48.255 23:45:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:41:48.255 23:45:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XWR935z5rc 00:41:48.255 23:45:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XWR935z5rc 00:41:48.513 23:45:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.S6MPg9wynV 00:41:48.513 23:45:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.S6MPg9wynV 00:41:48.771 23:45:57 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:48.771 23:45:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:49.029 nvme0n1 00:41:49.030 23:45:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:41:49.030 23:45:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:49.288 23:45:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:41:49.288 "subsystems": [ 00:41:49.288 { 00:41:49.288 "subsystem": "keyring", 00:41:49.288 "config": [ 00:41:49.288 { 00:41:49.288 "method": "keyring_file_add_key", 00:41:49.288 "params": { 00:41:49.288 "name": "key0", 00:41:49.288 "path": "/tmp/tmp.XWR935z5rc" 00:41:49.288 } 00:41:49.288 }, 00:41:49.288 { 00:41:49.288 "method": "keyring_file_add_key", 00:41:49.288 "params": { 00:41:49.288 "name": "key1", 00:41:49.288 "path": "/tmp/tmp.S6MPg9wynV" 00:41:49.288 } 00:41:49.288 } 00:41:49.288 ] 00:41:49.288 }, 00:41:49.288 { 00:41:49.288 "subsystem": "iobuf", 00:41:49.288 "config": [ 00:41:49.289 { 00:41:49.289 "method": "iobuf_set_options", 00:41:49.289 "params": { 00:41:49.289 "small_pool_count": 8192, 00:41:49.289 "large_pool_count": 1024, 00:41:49.289 "small_bufsize": 8192, 00:41:49.289 "large_bufsize": 135168 00:41:49.289 } 00:41:49.289 } 00:41:49.289 ] 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "subsystem": "sock", 00:41:49.289 "config": [ 00:41:49.289 { 00:41:49.289 "method": "sock_set_default_impl", 00:41:49.289 "params": { 00:41:49.289 "impl_name": "posix" 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "sock_impl_set_options", 00:41:49.289 "params": { 00:41:49.289 "impl_name": "ssl", 00:41:49.289 "recv_buf_size": 4096, 00:41:49.289 "send_buf_size": 4096, 00:41:49.289 "enable_recv_pipe": true, 00:41:49.289 "enable_quickack": false, 00:41:49.289 "enable_placement_id": 0, 00:41:49.289 "enable_zerocopy_send_server": true, 00:41:49.289 "enable_zerocopy_send_client": false, 00:41:49.289 "zerocopy_threshold": 0, 00:41:49.289 "tls_version": 0, 00:41:49.289 "enable_ktls": false 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "sock_impl_set_options", 00:41:49.289 "params": { 00:41:49.289 "impl_name": "posix", 00:41:49.289 "recv_buf_size": 2097152, 00:41:49.289 "send_buf_size": 2097152, 00:41:49.289 "enable_recv_pipe": true, 00:41:49.289 "enable_quickack": false, 00:41:49.289 "enable_placement_id": 0, 00:41:49.289 "enable_zerocopy_send_server": true, 00:41:49.289 "enable_zerocopy_send_client": false, 00:41:49.289 "zerocopy_threshold": 0, 00:41:49.289 "tls_version": 0, 00:41:49.289 "enable_ktls": false 00:41:49.289 } 00:41:49.289 } 00:41:49.289 ] 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "subsystem": "vmd", 00:41:49.289 "config": [] 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "subsystem": "accel", 00:41:49.289 "config": [ 00:41:49.289 { 00:41:49.289 "method": "accel_set_options", 00:41:49.289 "params": { 00:41:49.289 "small_cache_size": 128, 00:41:49.289 "large_cache_size": 16, 00:41:49.289 "task_count": 2048, 00:41:49.289 "sequence_count": 2048, 00:41:49.289 "buf_count": 2048 00:41:49.289 } 00:41:49.289 } 00:41:49.289 ] 00:41:49.289 
}, 00:41:49.289 { 00:41:49.289 "subsystem": "bdev", 00:41:49.289 "config": [ 00:41:49.289 { 00:41:49.289 "method": "bdev_set_options", 00:41:49.289 "params": { 00:41:49.289 "bdev_io_pool_size": 65535, 00:41:49.289 "bdev_io_cache_size": 256, 00:41:49.289 "bdev_auto_examine": true, 00:41:49.289 "iobuf_small_cache_size": 128, 00:41:49.289 "iobuf_large_cache_size": 16 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "bdev_raid_set_options", 00:41:49.289 "params": { 00:41:49.289 "process_window_size_kb": 1024 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "bdev_iscsi_set_options", 00:41:49.289 "params": { 00:41:49.289 "timeout_sec": 30 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "bdev_nvme_set_options", 00:41:49.289 "params": { 00:41:49.289 "action_on_timeout": "none", 00:41:49.289 "timeout_us": 0, 00:41:49.289 "timeout_admin_us": 0, 00:41:49.289 "keep_alive_timeout_ms": 10000, 00:41:49.289 "arbitration_burst": 0, 00:41:49.289 "low_priority_weight": 0, 00:41:49.289 "medium_priority_weight": 0, 00:41:49.289 "high_priority_weight": 0, 00:41:49.289 "nvme_adminq_poll_period_us": 10000, 00:41:49.289 "nvme_ioq_poll_period_us": 0, 00:41:49.289 "io_queue_requests": 512, 00:41:49.289 "delay_cmd_submit": true, 00:41:49.289 "transport_retry_count": 4, 00:41:49.289 "bdev_retry_count": 3, 00:41:49.289 "transport_ack_timeout": 0, 00:41:49.289 "ctrlr_loss_timeout_sec": 0, 00:41:49.289 "reconnect_delay_sec": 0, 00:41:49.289 "fast_io_fail_timeout_sec": 0, 00:41:49.289 "disable_auto_failback": false, 00:41:49.289 "generate_uuids": false, 00:41:49.289 "transport_tos": 0, 00:41:49.289 "nvme_error_stat": false, 00:41:49.289 "rdma_srq_size": 0, 00:41:49.289 "io_path_stat": false, 00:41:49.289 "allow_accel_sequence": false, 00:41:49.289 "rdma_max_cq_size": 0, 00:41:49.289 "rdma_cm_event_timeout_ms": 0, 00:41:49.289 "dhchap_digests": [ 00:41:49.289 "sha256", 00:41:49.289 "sha384", 00:41:49.289 "sha512" 00:41:49.289 ], 00:41:49.289 "dhchap_dhgroups": [ 00:41:49.289 "null", 00:41:49.289 "ffdhe2048", 00:41:49.289 "ffdhe3072", 00:41:49.289 "ffdhe4096", 00:41:49.289 "ffdhe6144", 00:41:49.289 "ffdhe8192" 00:41:49.289 ] 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "bdev_nvme_attach_controller", 00:41:49.289 "params": { 00:41:49.289 "name": "nvme0", 00:41:49.289 "trtype": "TCP", 00:41:49.289 "adrfam": "IPv4", 00:41:49.289 "traddr": "127.0.0.1", 00:41:49.289 "trsvcid": "4420", 00:41:49.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.289 "prchk_reftag": false, 00:41:49.289 "prchk_guard": false, 00:41:49.289 "ctrlr_loss_timeout_sec": 0, 00:41:49.289 "reconnect_delay_sec": 0, 00:41:49.289 "fast_io_fail_timeout_sec": 0, 00:41:49.289 "psk": "key0", 00:41:49.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:49.289 "hdgst": false, 00:41:49.289 "ddgst": false 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "bdev_nvme_set_hotplug", 00:41:49.289 "params": { 00:41:49.289 "period_us": 100000, 00:41:49.289 "enable": false 00:41:49.289 } 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "method": "bdev_wait_for_examine" 00:41:49.289 } 00:41:49.289 ] 00:41:49.289 }, 00:41:49.289 { 00:41:49.289 "subsystem": "nbd", 00:41:49.289 "config": [] 00:41:49.289 } 00:41:49.289 ] 00:41:49.289 }' 00:41:49.289 23:45:58 keyring_file -- keyring/file.sh@114 -- # killprocess 2715987 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2715987 ']' 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2715987 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@953 -- # uname 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2715987 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2715987' 00:41:49.289 killing process with pid 2715987 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@967 -- # kill 2715987 00:41:49.289 Received shutdown signal, test time was about 1.000000 seconds 00:41:49.289 00:41:49.289 Latency(us) 00:41:49.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:49.289 =================================================================================================================== 00:41:49.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:49.289 23:45:58 keyring_file -- common/autotest_common.sh@972 -- # wait 2715987 00:41:50.225 23:45:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=2717665 00:41:50.225 23:45:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2717665 /var/tmp/bperf.sock 00:41:50.225 23:45:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2717665 ']' 00:41:50.225 23:45:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:50.225 23:45:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:50.225 23:45:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:50.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
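The second bdevperf instance above is launched with -c /dev/fd/63: file.sh echoes the JSON captured by the earlier save_config call back in through process substitution, so the new process rebuilds the keyring and the TLS-enabled controller from configuration alone, without any setup RPCs. Roughly (paths abbreviated, $config holding the saved JSON):

# replay the saved configuration into a fresh bdevperf
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")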
00:41:50.225 23:45:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:50.225 23:45:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:50.225 23:45:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:50.225 23:45:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:41:50.225 "subsystems": [ 00:41:50.225 { 00:41:50.225 "subsystem": "keyring", 00:41:50.225 "config": [ 00:41:50.225 { 00:41:50.225 "method": "keyring_file_add_key", 00:41:50.225 "params": { 00:41:50.225 "name": "key0", 00:41:50.225 "path": "/tmp/tmp.XWR935z5rc" 00:41:50.225 } 00:41:50.225 }, 00:41:50.225 { 00:41:50.225 "method": "keyring_file_add_key", 00:41:50.225 "params": { 00:41:50.225 "name": "key1", 00:41:50.225 "path": "/tmp/tmp.S6MPg9wynV" 00:41:50.225 } 00:41:50.225 } 00:41:50.225 ] 00:41:50.225 }, 00:41:50.225 { 00:41:50.225 "subsystem": "iobuf", 00:41:50.225 "config": [ 00:41:50.225 { 00:41:50.225 "method": "iobuf_set_options", 00:41:50.225 "params": { 00:41:50.225 "small_pool_count": 8192, 00:41:50.225 "large_pool_count": 1024, 00:41:50.225 "small_bufsize": 8192, 00:41:50.225 "large_bufsize": 135168 00:41:50.225 } 00:41:50.225 } 00:41:50.225 ] 00:41:50.225 }, 00:41:50.225 { 00:41:50.225 "subsystem": "sock", 00:41:50.225 "config": [ 00:41:50.225 { 00:41:50.225 "method": "sock_set_default_impl", 00:41:50.225 "params": { 00:41:50.225 "impl_name": "posix" 00:41:50.225 } 00:41:50.225 }, 00:41:50.225 { 00:41:50.225 "method": "sock_impl_set_options", 00:41:50.225 "params": { 00:41:50.225 "impl_name": "ssl", 00:41:50.225 "recv_buf_size": 4096, 00:41:50.225 "send_buf_size": 4096, 00:41:50.225 "enable_recv_pipe": true, 00:41:50.225 "enable_quickack": false, 00:41:50.225 "enable_placement_id": 0, 00:41:50.225 "enable_zerocopy_send_server": true, 00:41:50.225 "enable_zerocopy_send_client": false, 00:41:50.225 "zerocopy_threshold": 0, 00:41:50.225 "tls_version": 0, 00:41:50.225 "enable_ktls": false 00:41:50.225 } 00:41:50.225 }, 00:41:50.225 { 00:41:50.225 "method": "sock_impl_set_options", 00:41:50.226 "params": { 00:41:50.226 "impl_name": "posix", 00:41:50.226 "recv_buf_size": 2097152, 00:41:50.226 "send_buf_size": 2097152, 00:41:50.226 "enable_recv_pipe": true, 00:41:50.226 "enable_quickack": false, 00:41:50.226 "enable_placement_id": 0, 00:41:50.226 "enable_zerocopy_send_server": true, 00:41:50.226 "enable_zerocopy_send_client": false, 00:41:50.226 "zerocopy_threshold": 0, 00:41:50.226 "tls_version": 0, 00:41:50.226 "enable_ktls": false 00:41:50.226 } 00:41:50.226 } 00:41:50.226 ] 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "subsystem": "vmd", 00:41:50.226 "config": [] 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "subsystem": "accel", 00:41:50.226 "config": [ 00:41:50.226 { 00:41:50.226 "method": "accel_set_options", 00:41:50.226 "params": { 00:41:50.226 "small_cache_size": 128, 00:41:50.226 "large_cache_size": 16, 00:41:50.226 "task_count": 2048, 00:41:50.226 "sequence_count": 2048, 00:41:50.226 "buf_count": 2048 00:41:50.226 } 00:41:50.226 } 00:41:50.226 ] 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "subsystem": "bdev", 00:41:50.226 "config": [ 00:41:50.226 { 00:41:50.226 "method": "bdev_set_options", 00:41:50.226 "params": { 00:41:50.226 "bdev_io_pool_size": 65535, 00:41:50.226 "bdev_io_cache_size": 256, 00:41:50.226 "bdev_auto_examine": true, 00:41:50.226 "iobuf_small_cache_size": 128, 00:41:50.226 "iobuf_large_cache_size": 16 
00:41:50.226 } 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "method": "bdev_raid_set_options", 00:41:50.226 "params": { 00:41:50.226 "process_window_size_kb": 1024 00:41:50.226 } 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "method": "bdev_iscsi_set_options", 00:41:50.226 "params": { 00:41:50.226 "timeout_sec": 30 00:41:50.226 } 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "method": "bdev_nvme_set_options", 00:41:50.226 "params": { 00:41:50.226 "action_on_timeout": "none", 00:41:50.226 "timeout_us": 0, 00:41:50.226 "timeout_admin_us": 0, 00:41:50.226 "keep_alive_timeout_ms": 10000, 00:41:50.226 "arbitration_burst": 0, 00:41:50.226 "low_priority_weight": 0, 00:41:50.226 "medium_priority_weight": 0, 00:41:50.226 "high_priority_weight": 0, 00:41:50.226 "nvme_adminq_poll_period_us": 10000, 00:41:50.226 "nvme_ioq_poll_period_us": 0, 00:41:50.226 "io_queue_requests": 512, 00:41:50.226 "delay_cmd_submit": true, 00:41:50.226 "transport_retry_count": 4, 00:41:50.226 "bdev_retry_count": 3, 00:41:50.226 "transport_ack_timeout": 0, 00:41:50.226 "ctrlr_loss_timeout_sec": 0, 00:41:50.226 "reconnect_delay_sec": 0, 00:41:50.226 "fast_io_fail_timeout_sec": 0, 00:41:50.226 "disable_auto_failback": false, 00:41:50.226 "generate_uuids": false, 00:41:50.226 "transport_tos": 0, 00:41:50.226 "nvme_error_stat": false, 00:41:50.226 "rdma_srq_size": 0, 00:41:50.226 "io_path_stat": false, 00:41:50.226 "allow_accel_sequence": false, 00:41:50.226 "rdma_max_cq_size": 0, 00:41:50.226 "rdma_cm_event_timeout_ms": 0, 00:41:50.226 "dhchap_digests": [ 00:41:50.226 "sha256", 00:41:50.226 "sha384", 00:41:50.226 "sha512" 00:41:50.226 ], 00:41:50.226 "dhchap_dhgroups": [ 00:41:50.226 "null", 00:41:50.226 "ffdhe2048", 00:41:50.226 "ffdhe3072", 00:41:50.226 "ffdhe4096", 00:41:50.226 "ffdhe6144", 00:41:50.226 "ffdhe8192" 00:41:50.226 ] 00:41:50.226 } 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "method": "bdev_nvme_attach_controller", 00:41:50.226 "params": { 00:41:50.226 "name": "nvme0", 00:41:50.226 "trtype": "TCP", 00:41:50.226 "adrfam": "IPv4", 00:41:50.226 "traddr": "127.0.0.1", 00:41:50.226 "trsvcid": "4420", 00:41:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:50.226 "prchk_reftag": false, 00:41:50.226 "prchk_guard": false, 00:41:50.226 "ctrlr_loss_timeout_sec": 0, 00:41:50.226 "reconnect_delay_sec": 0, 00:41:50.226 "fast_io_fail_timeout_sec": 0, 00:41:50.226 "psk": "key0", 00:41:50.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:50.226 "hdgst": false, 00:41:50.226 "ddgst": false 00:41:50.226 } 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "method": "bdev_nvme_set_hotplug", 00:41:50.226 "params": { 00:41:50.226 "period_us": 100000, 00:41:50.226 "enable": false 00:41:50.226 } 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "method": "bdev_wait_for_examine" 00:41:50.226 } 00:41:50.226 ] 00:41:50.226 }, 00:41:50.226 { 00:41:50.226 "subsystem": "nbd", 00:41:50.226 "config": [] 00:41:50.226 } 00:41:50.226 ] 00:41:50.226 }' 00:41:50.226 [2024-07-10 23:45:59.272404] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
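Once the replayed instance is up, the jq length and get_refcnt checks below confirm that both keys were registered from the config and that the attached controller holds an extra reference on key0 (refcnt 2 versus 1 for key1). Each get_refcnt in this log is just a filter over keyring_get_keys on the bperf socket, along these lines:

# list the keys registered with the bperf instance and extract one refcount
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'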
00:41:50.226 [2024-07-10 23:45:59.272496] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717665 ] 00:41:50.484 EAL: No free 2048 kB hugepages reported on node 1 00:41:50.484 [2024-07-10 23:45:59.372204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.743 [2024-07-10 23:45:59.596206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:51.002 [2024-07-10 23:46:00.054308] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:51.260 23:46:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:51.260 23:46:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:41:51.260 23:46:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:41:51.260 23:46:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:41:51.260 23:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:51.519 23:46:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:41:51.519 23:46:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:51.519 23:46:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:51.519 23:46:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:51.519 23:46:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:51.779 23:46:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:41:51.779 23:46:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:41:51.779 23:46:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:41:51.779 23:46:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:52.038 23:46:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:41:52.038 23:46:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:52.038 23:46:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XWR935z5rc /tmp/tmp.S6MPg9wynV 00:41:52.038 23:46:00 keyring_file -- keyring/file.sh@20 -- # killprocess 2717665 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2717665 ']' 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2717665 00:41:52.038 23:46:00 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717665 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717665' 00:41:52.038 killing process with pid 2717665 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@967 -- # kill 2717665 00:41:52.038 Received shutdown signal, test time was about 1.000000 seconds 00:41:52.038 00:41:52.038 Latency(us) 00:41:52.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:52.038 =================================================================================================================== 00:41:52.038 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:52.038 23:46:00 keyring_file -- common/autotest_common.sh@972 -- # wait 2717665 00:41:52.976 23:46:01 keyring_file -- keyring/file.sh@21 -- # killprocess 2715754 00:41:52.976 23:46:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2715754 ']' 00:41:52.976 23:46:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2715754 00:41:52.976 23:46:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:41:52.976 23:46:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:52.976 23:46:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2715754 00:41:52.976 23:46:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:52.976 23:46:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:52.976 23:46:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2715754' 00:41:52.976 killing process with pid 2715754 00:41:52.976 23:46:02 keyring_file -- common/autotest_common.sh@967 -- # kill 2715754 00:41:52.976 [2024-07-10 23:46:02.003064] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:41:52.976 23:46:02 keyring_file -- common/autotest_common.sh@972 -- # wait 2715754 00:41:55.512 00:41:55.512 real 0m16.244s 00:41:55.512 user 0m34.218s 00:41:55.512 sys 0m2.926s 00:41:55.512 23:46:04 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:55.512 23:46:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:55.512 ************************************ 00:41:55.512 END TEST keyring_file 00:41:55.512 ************************************ 00:41:55.512 23:46:04 -- common/autotest_common.sh@1142 -- # return 0 00:41:55.512 23:46:04 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:41:55.512 23:46:04 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:55.512 23:46:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:55.512 23:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:55.512 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:41:55.512 ************************************ 00:41:55.512 START TEST keyring_linux 00:41:55.512 ************************************ 00:41:55.512 23:46:04 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:55.512 * Looking for test storage... 00:41:55.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:55.512 23:46:04 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:55.512 23:46:04 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:55.512 23:46:04 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:55.512 23:46:04 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:55.512 23:46:04 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:55.512 23:46:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.512 23:46:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.512 23:46:04 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.512 23:46:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:55.512 23:46:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:55.512 23:46:04 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:41:55.772 23:46:04 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:55.772 /tmp/:spdk-test:key0 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:41:55.772 23:46:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:55.772 23:46:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:55.772 /tmp/:spdk-test:key1 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2718521 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2718521 00:41:55.772 23:46:04 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:55.772 23:46:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2718521 ']' 00:41:55.772 23:46:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.772 23:46:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:55.772 23:46:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:55.772 23:46:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:55.772 23:46:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:55.772 [2024-07-10 23:46:04.749899] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:41:55.772 [2024-07-10 23:46:04.750009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718521 ] 00:41:55.772 EAL: No free 2048 kB hugepages reported on node 1 00:41:56.030 [2024-07-10 23:46:04.856566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.030 [2024-07-10 23:46:05.066950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.968 23:46:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:56.968 23:46:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:41:56.968 23:46:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:56.968 23:46:05 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:56.968 23:46:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:56.968 [2024-07-10 23:46:05.975418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:56.968 null0 00:41:56.968 [2024-07-10 23:46:06.007446] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:56.968 [2024-07-10 23:46:06.007802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:56.968 23:46:06 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:56.968 23:46:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:56.968 570083071 00:41:56.968 23:46:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:56.968 286491756 00:41:57.227 23:46:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2718755 00:41:57.227 23:46:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2718755 /var/tmp/bperf.sock 00:41:57.227 23:46:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:57.227 23:46:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2718755 ']' 00:41:57.227 23:46:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:57.227 23:46:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:57.227 23:46:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:57.227 23:46:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:57.227 23:46:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:57.227 [2024-07-10 23:46:06.105546] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
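keyring_linux repeats the keyring_file flow, but the PSKs live in the kernel session keyring rather than in files: keyctl add returns the serials printed above (570083071 and 286491756), and bdevperf is started with --wait-for-rpc so the keyring can be enabled before the framework initializes. Condensed from the commands in this run, with the key material copied from the log:

    # register the interchange-format PSK under a well-known name in the
    # session keyring; keyctl prints the new key's serial number
    sn=$(keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    ./build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --psk now names a kernel key, not a file path
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0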
00:41:57.227 [2024-07-10 23:46:06.105635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718755 ] 00:41:57.227 EAL: No free 2048 kB hugepages reported on node 1 00:41:57.227 [2024-07-10 23:46:06.208429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.486 [2024-07-10 23:46:06.429290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:58.054 23:46:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:58.054 23:46:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:41:58.054 23:46:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:58.054 23:46:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:58.054 23:46:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:58.054 23:46:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:58.622 23:46:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:58.622 23:46:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:58.881 [2024-07-10 23:46:07.756730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:58.881 nvme0n1 00:41:58.881 23:46:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:41:58.881 23:46:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:58.881 23:46:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:58.881 23:46:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:58.881 23:46:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:58.881 23:46:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.140 23:46:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:59.140 23:46:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:59.140 23:46:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:59.140 23:46:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:59.140 23:46:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.140 23:46:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:59.140 23:46:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@25 -- # sn=570083071 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 570083071 == \5\7\0\0\8\3\0\7\1 ]] 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 570083071 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:59.399 23:46:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:59.399 Running I/O for 1 seconds... 00:42:00.337 00:42:00.337 Latency(us) 00:42:00.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:00.337 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:00.337 nvme0n1 : 1.01 13037.34 50.93 0.00 0.00 9764.93 8320.22 18350.08 00:42:00.337 =================================================================================================================== 00:42:00.337 Total : 13037.34 50.93 0.00 0.00 9764.93 8320.22 18350.08 00:42:00.337 0 00:42:00.337 23:46:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:00.337 23:46:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:00.596 23:46:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:00.596 23:46:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:00.596 23:46:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:00.596 23:46:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:00.596 23:46:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:00.596 23:46:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:00.877 23:46:09 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:00.877 [2024-07-10 23:46:09.843122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:00.877 [2024-07-10 23:46:09.843368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331d80 (107): Transport endpoint is not connected 00:42:00.877 [2024-07-10 23:46:09.844349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331d80 (9): Bad file descriptor 00:42:00.877 [2024-07-10 23:46:09.845346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:00.877 [2024-07-10 23:46:09.845373] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:00.877 [2024-07-10 23:46:09.845385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:00.877 request: 00:42:00.877 { 00:42:00.877 "name": "nvme0", 00:42:00.877 "trtype": "tcp", 00:42:00.877 "traddr": "127.0.0.1", 00:42:00.877 "adrfam": "ipv4", 00:42:00.877 "trsvcid": "4420", 00:42:00.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:00.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:00.877 "prchk_reftag": false, 00:42:00.877 "prchk_guard": false, 00:42:00.877 "hdgst": false, 00:42:00.877 "ddgst": false, 00:42:00.877 "psk": ":spdk-test:key1", 00:42:00.877 "method": "bdev_nvme_attach_controller", 00:42:00.877 "req_id": 1 00:42:00.877 } 00:42:00.877 Got JSON-RPC error response 00:42:00.877 response: 00:42:00.877 { 00:42:00.877 "code": -5, 00:42:00.877 "message": "Input/output error" 00:42:00.877 } 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@33 -- # sn=570083071 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 570083071 00:42:00.877 1 links removed 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@33 -- # sn=286491756 
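The second attach deliberately uses :spdk-test:key1, which the target was never configured with, so the JSON-RPC error above (code -5, Input/output error) is the outcome the NOT wrapper asserts. Cleanup then resolves each key name back to its serial and drops it from the session keyring; each "1 links removed" line around this point is keyctl confirming one unlink. A sketch of that cleanup loop, mirroring unlink_key as it runs here:

    # name -> serial -> unlink, for both test keys
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name")
        keyctl unlink "$sn"     # prints "N links removed"
    done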
00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 286491756 00:42:00.877 1 links removed 00:42:00.877 23:46:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2718755 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2718755 ']' 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2718755 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718755 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718755' 00:42:00.877 killing process with pid 2718755 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 2718755 00:42:00.877 Received shutdown signal, test time was about 1.000000 seconds 00:42:00.877 00:42:00.877 Latency(us) 00:42:00.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:00.877 =================================================================================================================== 00:42:00.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:00.877 23:46:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 2718755 00:42:02.265 23:46:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2718521 00:42:02.265 23:46:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2718521 ']' 00:42:02.265 23:46:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2718521 00:42:02.265 23:46:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:02.265 23:46:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:02.265 23:46:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718521 00:42:02.265 23:46:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:02.265 23:46:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:02.265 23:46:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718521' 00:42:02.265 killing process with pid 2718521 00:42:02.265 23:46:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 2718521 00:42:02.265 23:46:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 2718521 00:42:04.799 00:42:04.799 real 0m8.965s 00:42:04.799 user 0m14.054s 00:42:04.799 sys 0m1.661s 00:42:04.799 23:46:13 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.799 23:46:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:04.799 ************************************ 00:42:04.799 END TEST keyring_linux 00:42:04.799 ************************************ 00:42:04.799 23:46:13 -- common/autotest_common.sh@1142 -- # return 0 00:42:04.799 23:46:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 
-- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:42:04.799 23:46:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:42:04.799 23:46:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:42:04.799 23:46:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:42:04.799 23:46:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:42:04.799 23:46:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:42:04.799 23:46:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:42:04.799 23:46:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:04.799 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:42:04.799 23:46:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:42:04.799 23:46:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:04.799 23:46:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:04.799 23:46:13 -- common/autotest_common.sh@10 -- # set +x 00:42:08.993 INFO: APP EXITING 00:42:08.993 INFO: killing all VMs 00:42:08.993 INFO: killing vhost app 00:42:08.993 INFO: EXIT DONE 00:42:11.531 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:42:11.531 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:42:11.531 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:42:14.078 Cleaning 00:42:14.078 Removing: /var/run/dpdk/spdk0/config 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:14.078 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:14.078 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:14.078 Removing: /var/run/dpdk/spdk1/config 00:42:14.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:14.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:14.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:14.078 Removing: 
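Functional testing ends here; what follows is the autotest epilogue. The Clean stage removes per-application DPDK runtime state and the SPDK trace buffers left in shared memory. A rough sketch of what the Removing: list below amounts to (the real logic lives in the common autotest scripts; treat this as an approximation):

    # per-app DPDK runtime dirs: config, fbarray_memseg/memzone files, hugepage_info
    sudo rm -rf /var/run/dpdk/spdk0 /var/run/dpdk/spdk1 /var/run/dpdk/spdk2 \
                /var/run/dpdk/spdk3 /var/run/dpdk/spdk4
    sudo rm -rf /var/run/dpdk/spdk_pid*              # leftovers from short-lived apps
    sudo rm -f  /dev/shm/bdev_svc_trace.1 /dev/shm/nvmf_trace.0 \
                /dev/shm/spdk_tgt_trace.pid*         # SPDK trace shm files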
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:14.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:14.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:14.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:14.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:14.079 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:14.079 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:14.079 Removing: /var/run/dpdk/spdk1/mp_socket 00:42:14.079 Removing: /var/run/dpdk/spdk2/config 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:14.079 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:14.079 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:14.079 Removing: /var/run/dpdk/spdk3/config 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:14.079 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:14.079 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:14.079 Removing: /var/run/dpdk/spdk4/config 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:14.079 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:14.079 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:14.079 Removing: /dev/shm/bdev_svc_trace.1 00:42:14.079 Removing: /dev/shm/nvmf_trace.0 00:42:14.079 Removing: /dev/shm/spdk_tgt_trace.pid2218252 00:42:14.079 Removing: /var/run/dpdk/spdk0 00:42:14.079 Removing: /var/run/dpdk/spdk1 00:42:14.079 Removing: /var/run/dpdk/spdk2 00:42:14.079 Removing: /var/run/dpdk/spdk3 00:42:14.079 Removing: /var/run/dpdk/spdk4 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2213632 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2215281 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2218252 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2219338 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2220523 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2221228 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2222654 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2222887 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2223480 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2225268 00:42:14.079 Removing: 
/var/run/dpdk/spdk_pid2226724 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2227621 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2228376 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2229188 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2229934 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2230200 00:42:14.079 Removing: /var/run/dpdk/spdk_pid2230460 00:42:14.337 Removing: /var/run/dpdk/spdk_pid2230956 00:42:14.337 Removing: /var/run/dpdk/spdk_pid2231935 00:42:14.337 Removing: /var/run/dpdk/spdk_pid2235158 00:42:14.337 Removing: /var/run/dpdk/spdk_pid2235879 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2236599 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2236831 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2238703 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2238939 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2240755 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2240933 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2241542 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2241774 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2242274 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2242511 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2244075 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2244458 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2244760 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2245489 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2245738 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2246251 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2246680 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2247077 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2247547 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2247952 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2248487 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2249027 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2249508 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2250229 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2250853 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2251332 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2251807 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2252287 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2252722 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2253119 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2253533 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2254004 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2254484 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2254969 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2255442 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2255925 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2256437 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2257203 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2261307 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2344578 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2349135 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2359264 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2364658 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2368875 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2369403 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2376308 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2385753 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2386216 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2390807 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2396803 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2399613 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2410481 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2419714 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2422133 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2423305 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2440740 00:42:14.338 Removing: 
/var/run/dpdk/spdk_pid2444931 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2470909 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2475634 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2477247 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2479300 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2479760 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2480006 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2480465 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2481430 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2483393 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2484936 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2485784 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2488411 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2489400 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2490365 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2494853 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2501092 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2506206 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2543501 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2547755 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2554045 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2556170 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2558530 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2563391 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2567678 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2575458 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2575461 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2580275 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2580465 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2580867 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2581398 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2581451 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2583245 00:42:14.338 Removing: /var/run/dpdk/spdk_pid2584860 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2586583 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2588254 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2589861 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2591459 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2597527 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2598090 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2599934 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2601083 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2607013 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2609822 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2615610 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2621476 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2630252 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2637452 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2637456 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2655508 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2656220 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2657143 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2657855 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2659270 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2659977 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2660775 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2661597 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2666595 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2667057 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2673342 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2673619 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2676063 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2683804 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2683820 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2689045 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2691126 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2693195 00:42:14.603 Removing: 
/var/run/dpdk/spdk_pid2694496 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2696658 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2698102 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2707412 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2707871 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2708547 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2711163 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2711704 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2712167 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2715754 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2715987 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2717665 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2718521 00:42:14.603 Removing: /var/run/dpdk/spdk_pid2718755 00:42:14.603 Clean 00:42:14.603 23:46:23 -- common/autotest_common.sh@1451 -- # return 0 00:42:14.603 23:46:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:42:14.603 23:46:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:14.603 23:46:23 -- common/autotest_common.sh@10 -- # set +x 00:42:14.603 23:46:23 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:42:14.603 23:46:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:14.604 23:46:23 -- common/autotest_common.sh@10 -- # set +x 00:42:14.604 23:46:23 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:14.604 23:46:23 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:14.866 23:46:23 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:14.866 23:46:23 -- spdk/autotest.sh@391 -- # hash lcov 00:42:14.866 23:46:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:42:14.866 23:46:23 -- spdk/autotest.sh@393 -- # hostname 00:42:14.866 23:46:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:14.866 geninfo: WARNING: invalid characters removed from testname! 
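Coverage post-processing comes next: the capture above (-c against the spdk tree, test name spdk-wfp-08) produces cov_test.info, which is then merged with the pre-test baseline and filtered so only code owned by the repo is reported. In outline, with the long rc option list from the log collapsed into one variable:

    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"   # plus the genhtml/geninfo rc flags shown above
    lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC --no-external -q -r cov_total.info "$pat" -o cov_total.info   # drop trees we don't own
    done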
00:42:36.800 23:46:42 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:36.800 23:46:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:38.177 23:46:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:40.083 23:46:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:41.988 23:46:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:43.427 23:46:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:45.335 23:46:54 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:45.336 23:46:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:45.336 23:46:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:42:45.336 23:46:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:45.336 23:46:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:45.336 23:46:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.336 23:46:54 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.336 23:46:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.336 23:46:54 -- paths/export.sh@5 -- $ export PATH 00:42:45.336 23:46:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.336 23:46:54 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:42:45.336 23:46:54 -- common/autobuild_common.sh@444 -- $ date +%s 00:42:45.336 23:46:54 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720648014.XXXXXX 00:42:45.336 23:46:54 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720648014.CzRjai 00:42:45.336 23:46:54 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:42:45.336 23:46:54 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:42:45.336 23:46:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:42:45.336 23:46:54 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:42:45.336 23:46:54 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:42:45.336 23:46:54 -- common/autobuild_common.sh@460 -- $ get_config_params 00:42:45.336 23:46:54 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:42:45.336 23:46:54 -- common/autotest_common.sh@10 -- $ set +x 00:42:45.336 23:46:54 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:42:45.336 23:46:54 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:42:45.336 23:46:54 -- pm/common@17 -- $ local monitor 00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:45.336 23:46:54 -- pm/common@21 -- $ date +%s 00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:45.336 23:46:54 -- pm/common@21 -- $ date +%s 00:42:45.336 
00:42:45.336 23:46:54 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:42:45.336 23:46:54 -- common/autobuild_common.sh@444 -- $ date +%s
00:42:45.336 23:46:54 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720648014.XXXXXX
00:42:45.336 23:46:54 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720648014.CzRjai
00:42:45.336 23:46:54 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:42:45.336 23:46:54 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:42:45.336 23:46:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:42:45.336 23:46:54 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:42:45.336 23:46:54 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:42:45.336 23:46:54 -- common/autobuild_common.sh@460 -- $ get_config_params
00:42:45.336 23:46:54 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:42:45.336 23:46:54 -- common/autotest_common.sh@10 -- $ set +x
00:42:45.336 23:46:54 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:42:45.336 23:46:54 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:42:45.336 23:46:54 -- pm/common@17 -- $ local monitor
00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:45.336 23:46:54 -- pm/common@21 -- $ date +%s
00:42:45.336 23:46:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:45.336 23:46:54 -- pm/common@21 -- $ date +%s
00:42:45.336 23:46:54 -- pm/common@25 -- $ sleep 1
00:42:45.336 23:46:54 -- pm/common@21 -- $ date +%s
00:42:45.336 23:46:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720648014
00:42:45.336 23:46:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720648014
00:42:45.336 23:46:54 -- pm/common@21 -- $ date +%s
00:42:45.336 23:46:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720648014
00:42:45.336 23:46:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720648014
00:42:45.336 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720648014_collect-vmstat.pm.log
00:42:45.336 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720648014_collect-cpu-load.pm.log
00:42:45.336 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720648014_collect-cpu-temp.pm.log
00:42:45.336 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720648014_collect-bmc-pm.bmc.pm.log
00:42:46.273 23:46:55 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:42:46.273 23:46:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:42:46.273 23:46:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:46.273 23:46:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:42:46.273 23:46:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:42:46.273 23:46:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:42:46.273 23:46:55 -- spdk/autopackage.sh@19 -- $ timing_finish
00:42:46.273 23:46:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:46.273 23:46:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:42:46.273 23:46:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:46.273 23:46:55 -- spdk/autopackage.sh@20 -- $ exit 0
00:42:46.273 23:46:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:42:46.273 23:46:55 -- pm/common@29 -- $ signal_monitor_resources TERM
00:42:46.273 23:46:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:42:46.273 23:46:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:46.273 23:46:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:42:46.273 23:46:55 -- pm/common@44 -- $ pid=2730335
00:42:46.273 23:46:55 -- pm/common@50 -- $ kill -TERM 2730335
00:42:46.273 23:46:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:46.273 23:46:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:42:46.273 23:46:55 -- pm/common@44 -- $ pid=2730336
00:42:46.273 23:46:55 -- pm/common@50 -- $ kill -TERM 2730336
00:42:46.273 23:46:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:46.273 23:46:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:42:46.273 23:46:55 -- pm/common@44 -- $ pid=2730338
00:42:46.273 23:46:55 -- pm/common@50 -- $ kill -TERM 2730338
00:42:46.273 23:46:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:46.273 23:46:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:42:46.273 23:46:55 -- pm/common@44 -- $ pid=2730367
00:42:46.273 23:46:55 -- pm/common@50 -- $ sudo -E kill -TERM 2730367
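The pm/common helpers above follow a conventional PID-file lifecycle: start_monitor_resources launches each collector (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) with a shared log prefix, and stop_monitor_resources later checks for each monitor's .pid file under the power/ output directory and sends SIGTERM, using sudo for the BMC collector that was started via sudo. A minimal sketch of the stop side, with the output variable and the <name>.pid layout assumed to match the paths above rather than quoted from pm/common:

  # Sketch only: terminate each monitor whose PID file exists.
  output=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  for name in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile="$output/power/$name.pid"
      [[ -e $pidfile ]] || continue
      pid=$(<"$pidfile")
      if [[ $name == collect-bmc-pm ]]; then
          sudo -E kill -TERM "$pid"   # BMC collector runs via sudo, as above
      else
          kill -TERM "$pid"
      fi
  done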
00:42:46.273 + [[ -n 2109998 ]]
00:42:46.273 + sudo kill 2109998
00:42:46.283 [Pipeline] }
00:42:46.302 [Pipeline] // stage
00:42:46.308 [Pipeline] }
00:42:46.320 [Pipeline] // timeout
00:42:46.326 [Pipeline] }
00:42:46.338 [Pipeline] // catchError
00:42:46.343 [Pipeline] }
00:42:46.358 [Pipeline] // wrap
00:42:46.363 [Pipeline] }
00:42:46.376 [Pipeline] // catchError
00:42:46.384 [Pipeline] stage
00:42:46.385 [Pipeline] { (Epilogue)
00:42:46.397 [Pipeline] catchError
00:42:46.398 [Pipeline] {
00:42:46.409 [Pipeline] echo
00:42:46.410 Cleanup processes
00:42:46.414 [Pipeline] sh
00:42:46.697 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:46.698 2730475 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:42:46.698 2730737 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:46.711 [Pipeline] sh
00:42:46.995 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:46.995 ++ grep -v 'sudo pgrep'
00:42:46.995 ++ awk '{print $1}'
00:42:46.995 + sudo kill -9 2730475
00:42:47.007 [Pipeline] sh
00:42:47.289 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:42:57.283 [Pipeline] sh
00:42:57.568 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:42:57.568 Artifacts sizes are good
00:42:57.584 [Pipeline] archiveArtifacts
00:42:57.591 Archiving artifacts
00:42:57.803 [Pipeline] sh
00:42:58.087 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:42:58.104 [Pipeline] cleanWs
00:42:58.115 [WS-CLEANUP] Deleting project workspace...
00:42:58.115 [WS-CLEANUP] Deferred wipeout is used...
00:42:58.122 [WS-CLEANUP] done
00:42:58.124 [Pipeline] }
00:42:58.152 [Pipeline] // catchError
00:42:58.170 [Pipeline] sh
00:42:58.458 + logger -p user.info -t JENKINS-CI
00:42:58.467 [Pipeline] }
00:42:58.485 [Pipeline] // stage
00:42:58.490 [Pipeline] }
00:42:58.506 [Pipeline] // node
00:42:58.513 [Pipeline] End of Pipeline
00:42:58.547 Finished: SUCCESS